boundary 2

Tag: digital humanities

  • Siobhan Senier — What Indigenous Literature Can Bring to Electronic Archives


    Siobhan Senier

    Indigenous people are here—here in digital space just as ineluctably as they are in all the other “unexpected places” where historian Philip Deloria (2004) suggests we go looking for them. Indigenous people are on Facebook, Twitter, and YouTube; they are gaming and writing code, podcasting and creating apps; they are building tribal websites that disseminate immediately useful information to community members while asserting their sovereignty. And they are increasingly present in electronic archives. We are seeing the rise of Indigenous digital collections and exhibits at most of the major heritage institutions (e.g., the Smithsonian) as well as at a range of museums, universities and government offices. Such collections carry the promise of giving tribal communities more ready access to materials that, in some cases, have been lost to them for decades or even centuries. They can enable some practical, tribal-nation rebuilding efforts, such as language revitalization projects. From English to Algonquian, an exhibit curated by the American Antiquarian Society, is just one example of a digitally mediated collaboration between tribal activists and an archiving institution that holds valuable historic Native-language materials.

    “Digital repatriation” is a term now used to describe many Indigenous electronic archives. These projects create electronic surrogates of heritage materials, often housed in non-Native museums and archives, making them more available to their tribal “source communities” as well as to the larger public. But digital repatriation has its limits. It is not, as some have pointed out, a substitute for the return of the original items. Moreover, it does not necessarily challenge the original archival politics. Most current Indigenous digital collections, indeed, are based on materials held in universities, museums and antiquarian societies—the types of institutions that historically had their own agendas of salvage anthropology, and that may or may not have come by their materials ethically in the first place. There are some practical reasons that settler institutions might be first to digitize: they tend to have rather large quantities of material, along with the staff, equipment and server space to undertake significant electronic projects. The best of these projects are critically self-conscious about their responsibilities to tribal communities. And yet the overall effect of digitizing settler collections first is to perpetuate colonial archival biases—biases, for instance, toward baskets and buckskins rather than political petitions; biases toward sepia photographs of elders rather than elders’ letters to state and federal agencies; biases toward more “exotic” images rather than newsletters showing Native activists successfully challenging settler institutions to acknowledge Indigenous peoples’ continuous and political presence.

    Those petitions, letters and newsletters do exist, but they tend to reside in the legions of small archives gathered, protected and curated by tribal people themselves, often with gallingly little material support or recognition from outside their communities. While it is true that many Indigenous cultural heritage items have been taken from their source communities for display in remote collecting institutions, it is also true that Indigenous people have continued to maintain their own archives of books, papers and art objects in tribal offices, tribal museums, attics and garages. Such items might be in precarious conditions of preservation, subject to mold, mildew or other damage. They may be incompletely inventoried, or catalogued only in an elder’s memory. And they are hardly ever digitized. A recent survey by the Association of Tribal Archives, Libraries, and Museums (2013) found that, even though digitization is now the industry standard for libraries and archives, very few tribal collections in the United States are digitizing anything at all. Moreover, the survey found, this often isn’t for lack of desire, but for lack of resources—lack of staff and time, lack of access to adequate equipment and training, lack of broadband.[1]

    Tribally stewarded collections often hold radically different kinds of materials that tell radically different stories from those historically promoted by institutions that thought they were “preserving” cultural remnants. Of particular interest to me as a literary scholar is the Indigenous writing that turns up in tribal and personal archives: tribal newsletters and periodicals; powwow and pageant programs; mimeographed books used to teach language and traditional narratives; recorded oral histories; letters, memoirs and more. Unlike the ethnographers’ photographs, colonial administrators’ records and (sometimes) decontextualized material objects that dominate larger museums, these writings tell stories of Indigenous survival and persistence. In what follows, I give a brief review of some of the best-known Indigenous electronic archives, followed by a consideration of how digitizing Indigenous writing, specifically, could change the way we see such archives. In their own recirculations of their writings online, Native people have shown relatively little interest in the concerns that currently dominate the field of Digital Humanities, including “preservation,” “open access,” “scalability,” and (perhaps the most unfortunate term in this context) “discoverability.” They seem much keener to continue what those literary traditions have in fact always done: assert and enact their communities’ continuous presence and political viability.

    Digital Repatriation and Other Consultative Practices

    Indigenous digital archives are very often based in universities, headed by professional scholars, often with substantial community engagement. The Yale Indian Papers Project, which seeks to improve access to primary documents demonstrating the continuous presence of Indigenous people in New England, elicits editorial assistance from a number of Indigenous scholars and tribal historians. The award-winning Plateau Peoples’ Web Portal at Washington State University takes this collaborative methodology one step further, inviting consultants from neighboring tribal nations to come into the university archives and select and curate materials for the web. Other digital Indigenous exhibits come from prestigious museums and collecting institutions, like the American Philosophical Society’s “Native American Images Project.” Indeed, with so many libraries, museums and archives now creating digital collections (whether in the form of e-books, scanned documents, or full electronic exhibits), materials related to Indigenous people can be found in an ever-growing variety of formats and places. Hence, there is also a rising popularity of portals—regional or state-based sites that can act as gateways to a wide variety of digital collections. Some are specific to Indigenous topics and locations, like the Carlisle Indian School Digital Resource Center, which compiles web-based resources for studying U.S. boarding school history. Other digital portals sweep up Indigenous objects along with other cultural materials, like the Maine Memory Network or the Digital Public Library of America.

    It is not surprising that the bent of most of these collections is decidedly ethnographic, given that Indigenous people the world over have been the subjects of one prolonged imperial looting. Cultural heritage professionals are now legally (or at least ethically) required to repatriate human remains and sacred objects, but in recent years, many have also begun to speak of “digital repatriation.” Just as digital collections of all kinds are providing new access to materials held in far-flung locations, digital repatriation is arguably a boon for elders, or for Native people living far from the Smithsonian Museum, for instance, who can now readily view their cultural property. The digitization of heritage materials can, in fact, help promote cultural revitalization and culturally responsive teaching (Roy and Christal 2002; Srinivasan et al. 2010). Many such projects aim expressly “to reinstate the role of the cultural object as a generator, rather than an artifact, of cultural information and interpretation” (Brown and Nicholas 2012, 313).

    Nonetheless, Indigenous people may be forgiven if they take a dim view of their cultural heritage items being posted willy-nilly on the internet. Some have questioned whether digital repatriation is a subterfuge for forestalling or refusing the return of the original items. Jim Enote (Zuni), Executive Director of the A:shiwi A:wan Museum and Heritage Center, has gone so far as to say that the words “digital” and “repatriation” simply don’t belong in the same sentence, pointing out that nothing in fact is being repatriated, since even the digital item is, in most cases, also created by a non-Native institution (Boast and Enote 2013, 110). Others worry about the common assumption that unfettered access to information is always and everywhere an unqualified good. Anthropologist Kimberly Christen has asked pointedly, “Does Information Really Want to be Free?” Her answer: “For many Indigenous communities in settler societies, the public domain and an information commons are just another colonial mash-up where their cultural materials and knowledge are ‘open’ for the profit and benefit of others, but remain separated from the sociocultural systems in which they were and continue to be used, circulated, and made meaningful” (Christen 2012, 2879–80).

    A truly decolonized archive, then, calls for a critical re-examination of the archive itself. As Ellen Cushman (Cherokee) puts it, “Archives of Indigenous artifacts came into existence in part to elevate the Western tradition through a process of othering ‘primitive’ and Native traditions . . . . Tradition. Collection. Artifacts. Preservation. These tenets of colonial thought structure archives whether in material or digital forms” (Cushman 2013, 119). The most critical digital collections, therefore, are built not only through consultation with Indigenous knowledge-keepers, but also with considerable self-consciousness about the archival endeavor itself. The Yale editors, for instance, explain that “we cannot speak for all the disciplines that have a stake in our work, nor do we represent the perspective of Native people themselves . . . . [Therefore tribal] consultants’ annotations might include Native origin stories, oral sources, and traditional beliefs while also including Euro-American original sources of the same historical event or phenomena, thus offering two kinds of narratives of the past” (Grant-Costa, Glaza, and Sletcher 2012). Other sites may build this archival awareness into the interface itself. Performing Archive: Curtis + the “vanishing race,” for instance, seeks explicitly to “reject enlightenment ideals of the cumulative archive—i.e. that more materials lead to better, more accurate knowledge—in order to emphasize the digital archive as a site of critique and interpretation, wherein access is understood not in terms of access to truth, but to the possibility of past, present, and future performance” (Kim and Wernimont 2014).

    Additional innovations worth mentioning here include the content management system Mukurtu, initially developed by Christen and her colleagues to facilitate culturally responsive archiving for an Aboriginal Australian collection, and quickly embraced by projects worldwide. Recognizing that “Indigenous communities across the globe share similar sets of archival, cultural heritage, and content management needs” (Christen 2005, 317), Mukurtu lets them build their own digital collections and exhibits, while giving them fine-grained control over who can access those materials—e.g., through tribal membership, clan system, family network, or some other benchmark. Christen and her colleague Jane Anderson have also created a system of traditional knowledge (TK) licenses and labels—icons that can be placed on a website to help educate site visitors about the culturally appropriate use of heritage materials. The licenses (e.g., “TK Commercial,” “TK Non-Commercial”) are meant to be legal instruments for owners of heritage material; a tribal museum, for instance, could use them to signal how it intends for electronic material to be used or not used. The TK labels, meanwhile, are extra-legal tools meant to educate users about culturally appropriate approaches to material that may, legalistically, be in the “public domain,” but from a cultural standpoint have certain restrictions: e.g., “TK Secret/Sacred,” “TK Women Restricted,” “TK Community Use Only.”

    All of the projects described here, many still in their incipient stages, aim to decolonize archives at their core. They put Indigenous knowledge-keepers in partnership with computing and heritage management professionals to help communities determine how, whether, and why their collections shall be digitized and made available. As such, they have a great deal to teach digital literary projects—literary criticism (if I may) not being a profession historically inclined to consult with living subjects very much at all. Next, I ponder why, despite great strides in both Indigenous digital collections and literary digital collections, the twain have really yet to meet.

    Electronic Textualities: The Pasts and Futures of Indigenous Literature

    While signatures, deeds and other Native-authored texts surface occasionally in the aforementioned heritage projects, digital projects devoted expressly to Indigenous writing are relatively few and far between.[2] Granting that Aboriginal people, like any other people, do produce writings meant to be private, as a literary scholar I am confronted daily with a rather different problem than that of cultural “protection”: a great abundance of poetry, fiction and nonfiction written by Indigenous people, much of which just never sees the larger audiences for which it was intended. How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?

    Literary history is another of those unexpected places in which Indians are always found. But while Indigenous literature—both historic and contemporary—has garnered increasing attention in the academy and beyond, the Digital Humanities does not seem to have contributed very much to the expansion and promotion of these canons. Conversely, while DH has produced some dynamic and diverse literary scholarship, scholars in Native American Studies seem to be turning toward this scholarship only slowly. Perhaps digital literary studies has not felt terribly inviting to Indigenous texts; many observers (Earhart 2012; Koh 2015) have remarked that the emerging digital literary canon, indeed, looks an awful lot like the old one, with the lion’s share of the funding and prestige going to predictable figures like William Shakespeare, William Blake, and Walt Whitman. At this moment, I know of no mass movement to digitize Indigenous writing, although a number of “public domain” texts appear in places like the Internet Archive, Google Books, and Project Gutenberg.[3] Indigenous digital literature seems light years away from the kinds of scholarly and technical standards achieved by the Whitman and Rossetti Archives. And without a sizeable or searchable corpus, scholarship on Indigenous literature likewise seems light years from the kinds of text mining, topic modeling and network analysis that are au courant in DH.

    Instead, we see small-scale, emergent digital collections that nevertheless offer strong correctives to master narratives of Indigenous disappearance, and that supply further material for ongoing sovereignty struggles. The Hawaiian-language newspaper project is one powerful example. Started as a massive crowdsourcing effort that digitized at least half of the remarkable corpus of roughly 100 Native-language newspapers produced by Hawaiian people between the 1830s and the 1940s, it calls itself “the largest native-language cache in the Western world,” and promises to change the way Hawaiian history is seen. It might well do so if, as Noenoe Silva (2004, 2) has argued, “[t]he myth of [Indigenous Hawaiian] nonresistance was created in part because mainstream historians have studiously avoided the wealth of material written in Hawaiian.” A grassroots digitization movement like the Hawaiian Nupepa Project makes such studious avoidance much more difficult, and it brings to the larger world of Indigenous digital collections direct examples—through Indigenous literacy—of Indigenous political persistence.

    It thus points to the value of the literary in Indigenous digitization efforts. Jessica Pressman and Lisa Swanstrom (2013) have asked, “What kind of scholarly endeavors are possible when we think of the digital humanities as not just supplying the archives and data-sets for literary interpretation but also as promoting literary practices with an emphasis on aesthetics, on intertextuality, and writerly processes? What kind of scholarly practices and products might emerge from a decisively literary perspective and practice in the digital humanities?” Abenaki historian Lisa Brooks (2012, 309) has asked similar questions from an Indigenous perspective, positing that digital space allows us to challenge conventional notions of literary periodization and of place, to “follow paths of intellectual kinship, moving through rhizomic networks of influence and inquiry.” Brooks and other literary historians have long argued that Indigenous people have deployed alphabetic literacy strategically to (re)build their communities, restore and revitalize their traditions, and exercise their political and cultural sovereignty. Digital literary projects, like the Hawaiian newspaper project, can offer powerful extensions of these practices in electronic space.

    Dawnlandvoices.org: Curating Indigenous Literary Continuance

    These were some of the questions and issues we had in mind when we started dawnlandvoices.org.[4] This archive is emergent—not a straight scan-and-upload of items residing in one physical site or group of sites, but rather a collaboration among tribal authors, tribal collections, and university-based scholars and students. It came out of a print volume, Dawnland Voices: An Anthology of Indigenous Writing from New England (Senier 2014), which I edited with eleven tribal historians. Organized by tribal nation, the book ranges from the earliest writings (petroglyphs and political petitions) to the newest (hip-hop poetry and blog entries). The print volume already aimed to be a counter-archive, insofar as it represents the literary traditions of “New England,” a region that has built its very identity on colonial dispossession, colonial boundaries and the myth of Indian disappearance. It also already aimed to decolonize the archive, insofar as it distributes editorial authority and control to Indigenous writers, historians and knowledge-keepers. At almost 700 pages, though, Dawnland in book form could only scratch the surface of the wealth of writing that regional Native people have produced, and that remains, for the most part, in their own hands.

    We wanted a living document—one that could expand to include some of the vibrant pieces we could not fit in the book, one that could be revised and reshaped according to ongoing community conversation. And we wanted to keep presenting historic materials alongside new (in this case born-digital) texts, the better to highlight the long history of Indigenous writing in this region. But we also realized that this required resources. We approached the National Endowment for the Humanities and received a $38,000 Preservation and Access grant to explore how digital humanities resources might be better redistributed to empower tribal communities who want to digitize their texts, either for private tribal use or more public dissemination. The partners on this grant included three different, but representative kinds of collections: a tribal museum with some history of professional archiving and private support (the Tomaquag Indian Memorial Museum in Rhode Island); a tribal office that finds itself acting as an unofficial repository for a variety of papers and documents, and that does not have the resources to completely inventory or protect these (the Passamaquoddy Cultural Preservation Office in Maine); and four elders who have amassed a considerable collection of books, papers, and slides from their years working in the Boston Children’s Museum and Plimoth Plantation, and were storing these in their own homes (the Indigenous Resources Collaborative in Massachusetts). Under the terms of the grant, the University of New Hampshire sent digital librarians to each site to set up basic hardware and software for digitization, while training tribal historians in digitization basics. The end result of this two-year pilot project was a small exhibit of sample items from each archive.

    The obstacles to this kind of work for small tribal collections are perhaps not unique, but they are intense. Digitization is expensive, time-consuming, and labor-intensive, even more so for collections that do not have ample (or any) paid staff, that can’t afford to update basic software or that don’t even have reliable internet connections. And there were additional hurdles: while the pressure from DH writ large (and granting institutions individually) is frequently to demonstrate scalability, in the end, the tribal partners on this grant did not coalesce around a shared goal of digitizing their collections wholesale. The Passamaquoddy tribal heritage preservation officer has scanned and uploaded the greatest quantity of material by far, but he has strategically zeroed in on dozens of tribal newsletters containing radical histories of Native resistance and survival in the latter half of the twentieth century. The Tomaquag Museum does want to digitize its entire collection, but it prefers to do so in-house, for optimum control of intellectual property. The Indigenous Resources Collaborative, meanwhile, would rather digitize and curate just a small handful of items as richly as possible. While these elders were initially adamant that they wanted to learn to scan and upload their own documents, they learned quickly just how stultifying this labor is. What excited them much more was the process of selecting individual documents and dreaming about how to best share these online.  An old powwow flyer describing the Mashpee Wampanoag game of fireball, for instance, had them naming elders and players they could interview, with the possibility of adding metadata in the form of video or narrative audio.

    More than a year after articulating this aspiration, the IRC has not begun to conduct or record any such interviews. Such a project is beyond their current energies, time and resources; and to be sure, any continuation of their work on this partner project at dawnlandvoices.org should be compensated, which will mean applying for new grants. But the delay or inaction also points to a larger conundrum: that for all of the Web’s avowed multimodality, Indigenous digital collections have generally not reflected the longstanding multimodality of Indigenous literatures themselves—in particular, their mutually sustaining interplay of oral and written forms. Some (Golumbia 2015) would attribute this to an unwillingness within DH to recognize the kinds of digital language work being done by Indigenous communities worldwide. Perhaps, too, it owes something to the history of violence embedded in “recording” or “preserving” Indigenous oral traditions (Silko 1981); the Indigenous partners with whom I have worked are generally skeptical of the need to put their traditional narratives—or even some of the recorded oral histories they may have stored on cassette—online. Too, there is the time and labor involved in recording. It is now common to hear digital publishers wax enthusiastic about the “affordances” of the Web (it seems so easy, to just add an mp3), but with few exceptions, dawnlandvoices.org has not elicited many recordings, despite our invitations to authors to contribute them.

    Unlike the texts in the most esteemed digital literature archives like the Rossetti Archive (edited, contextualized and encoded to the highest scholarly standard), the texts in dawnlandvoices.org are often rough, edgy, and unfinished; and that, quite possibly, is the way they will remain. Insofar as dawnlandvoices.org aspires to be a “database” at all (and we are not sure that it does), it makes sense at this point for there to be multiple pathways in and out of that collection, multiple ways of formatting and presenting material. It is probably fair to say that most scholars working on Indigenous digital archives dream of a day when these sites will have robust community engagement and commentary. At the same time, many would readily admit that it’s not as simple as building it and hoping they will come. David Golumbia (2015) has gone so far as to suggest that what marginalizes Indigenous projects within DH is the archive-centric nature of the field itself—that while “most of the major First Nations groups now maintain rich community/governmental websites with a great deal of information on history, geography, culture, and language. . . none of this work, or little of it, is perceived or labeled as DH.” Thus, the esteemed digital archives might not, in fact, be what tribal communities want most. Brown and Nicholas raise the equally provocative possibility that “[i]nstitutional databases may . . . already have been superseded by social networking sites as digital repositories for cultural information” (2012, 315). And, in fact, that most pervasive and understandably maligned of social-networking sites, Facebook, seems to be serving some tribal museums’, authors’ and historians’ immediate cultural heritage needs surprisingly well. Many post historic photos or their own writings to their walls, and generate fabulously rich commentary: identifications of individuals in pictures, memories of places and events, praise and criticism for poetry. Facebook is a proprietary and notoriously problematic platform, especially on the issue of intellectual property. And yet it has made room, at least for now, for a kind of fugitive curation that, however fragile, raises the question of whether such curation should be “institutional” at all. We can see similar things happening on Twitter (as in Daniel Heath Justice’s recent “year of tweets” naming Indigenous authors) and Instagram (where artists like Stephen Paul Judd store, share, and comment on their work). Outside of DH and settler institutions, Indigenous people are creating all kinds of collections that—if they are not “archives” in a way that satisfies professional archivists—seem to do what Native people, individually and collectively, need them to do. At least for today, these collections create what First Nations artists Jason Lewis and Skawennati Tricia Fragnito call “Aboriginally determined territories in cyberspace” (2005).

    What the conversations initiated by Kim Christen, Jane Anderson, Jim Enote and others can bring to digital literature collections is a scrupulously ethical concern for Indigenous intellectual property, an insistence on first voice and community engagement. What Indigenous literature, in turn, can bring to the table is an insistence on politics and sovereignty. Like many literary scholars, I often struggle with what (if anything) makes “Literature” distinctive. It’s not that baskets or katsina masks cannot be read as expressions of sovereignty—they can, and they are. But Native literatures—particularly the kinds saved by Indigenous communities themselves rather than by large collecting institutions and salvage anthropologists—provide some of the most powerful and overt archives of resistance and resurgence. The invisibility of these kinds of tribal stories and tribal ways of knowing and keeping stories is an ongoing concern, even on the “open” Web. It may be that Digital Humanities writ large will continue to struggle against the gravitational pull of traditional literary and cultural canons. It is not likely, however, that Indigenous communities will wait for us.

    _____

    Siobhan Senier is associate professor of English at the University of New Hampshire. She is the editor of Dawnland Voices: An Anthology of Indigenous Writing from New England and dawnlandvoices.org.


    _____

    Notes

    [1] A study by Native Public Media (Morris and Meinrath 2009) found that broadband access in and around Native American and Alaska Native communities was less than 10 percent, sometimes as low as 5 to 6 percent.

    [2] Two rare, and newer, exceptions include the Occom Circle Project at Dartmouth College, and the Kim-Wait/Eisenberg Collection at Amherst College.

    [3] The University of Virginia Electronic Texts Center at one time had an excellent collection of Native-authored or Native-related works, but these are now buried within the main digital catalog.

    _____

    Works Cited

    • Association of Tribal Archives, Libraries, and Museums. 2013. “International Conference Program.” Santa Ana Pueblo, NM.
    • Boast, Robin, and Jim Enote. 2013. “Virtual Repatriation: It Is Neither Virtual nor Repatriation.” In Peter Biehl and Christopher Prescott, eds., Heritage in the Context of Globalization. SpringerBriefs in Archaeology. New York, NY: Springer New York. 103–13.
    • Brooks, Lisa. 2012. “The Primacy of the Present, the Primacy of Place: Navigating the Spiral of History in the Digital World.” PMLA 127:2. 308–16.
    • Brown, Deidre, and George Nicholas. 2012. “Protecting Indigenous Cultural Property in the Age of Digital Democracy: Institutional and Communal Responses to Canadian First Nations and Māori Heritage Concerns.” Journal of Material Culture 17:3. 307–24.
    • Christen, Kimberly. 2005. “Gone Digital: Aboriginal Remix and the Cultural Commons.” International Journal of Cultural Property 12:3. 315–45.
    • Christen, Kimberly. 2012. “Does Information Really Want to Be Free?: Indigenous Knowledge Systems and the Question of Openness.” International Journal of Communication 6. 2870–93.
    • Cushman, Ellen. 2013. “Wampum, Sequoyan, and Story: Decolonizing the Digital Archive.” College English 76:2. 116–35.
    • Deloria, Philip Joseph. 2004. Indians in Unexpected Places. Lawrence: University Press of Kansas.
    • Earhart, Amy. 2012. “Can Information Be Unfettered? Race and the New Digital Humanities Canon.” In Matt Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Golumbia, David. 2015. “Postcolonial Studies, Digital Humanities, and the Politics of Language.” Postcolonial Digital Humanities.
    • Grant-Costa, Paul, Tobias Glaza, and Michael Sletcher. 2012. “The Common Pot: Editing Native American Materials.” Scholarly Editing 33.
    • Kim, David J., and Jacqueline Wernimont. 2014. “‘Performing Archive’: Identity, Participation, and Responsibility in the Ethnic Archive.” Archive Journal 4.
    • Koh, Adeline. 2015. “A Letter to the Humanities: DH Will Not Save You.” Hybrid Pedagogy, April 19.
    • Lewis, Jason, and Skawennati Fragnito. 2005. “Aboriginal Territories in Cyberspace.” Cultural Survival (June).
    • Morris, Traci L., and Sascha D. Meinrath. 2009. New Media, Technology and Internet Use in Indian Country: Quantitative and Qualitative Analyses. Flagstaff, AZ: Native Public Media.
    • Pressman, Jessica, and Lisa Swanstrom. 2013. “The Literary And/As the Digital Humanities.” Digital Humanities Quarterly 7:1.
    • Roy, Loriene, and Mark Christal. 2002. “Digital Repatriation: Constructing a Culturally Responsive Virtual Museum Tour.” Journal of Library and Information Science 28:1. 14–18.
    • Senier, Siobhan, ed. 2014. Dawnland Voices: An Anthology of Indigenous Writing from New England. Lincoln: University of Nebraska Press.
    • Silko, Leslie Marmon. 1981. “An Old-Time Indian Attack Conducted in Two Parts.” In Geary Hobson, ed., The Remembered Earth: An Anthology of Contemporary Native American Literature. Albuquerque: University of New Mexico Press. 211–16.
    • Silva, Noenoe K. 2004. Aloha Betrayed: Native Hawaiian Resistance to American Colonialism. Durham: Duke University Press.
    • Srinivasan, Ramesh, et al. 2010. “Diverse Knowledges and Contact Zones within the Digital Museum.” Science Technology Human Values 35:5. 735–68.


  • Tim Duffy — Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn

    Tim Duffy — Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn

    Tim Duffy

    Christian Jacob, in The Sovereign Map, describes maps as enablers of fantasy: “Maps and globes allow us to live a voyage reduced to the gaze, stripped of the ups and downs and chance occurrences, a voyage without the narrative, without pitfalls, without even the departure” (2005). Consumers and theorists of maps, more than cartographers themselves, are especially well positioned to enjoy the “voyage reduced to the gaze” that cartographic artifacts (including texts) are able to provide. An outside view, distant from the production of the artifact, activates the epistemological potential of the artifact in a way that producing the same artifact cannot.

    This dynamic is found at the conceptual level of interpreting cartography as a discipline as well. J.B. Harley, in his famous essay “Deconstructing the Map,” writes that:

    a major roadblock to our understanding is that we still accept uncritically the broad consensus, with relatively few dissenting voices, of what cartographers tell us maps are supposed to be. In particular, we often tend to work from the premise that mappers engage in an unquestionably “scientific” or “objective” form of knowledge creation…It is better for us to begin from the premise that cartography is seldom what cartographers say it is (Harley 1989, 57).

    Harley urges an interpretation of maps outside the purview and authority of the map’s creator, just as a literary scholar would insist on the critic’s ability to understand a text beyond the authority of what its author says about it. There can be, in other words, a power in having distance from the act of making, a clarity that comes from standing outside the process of creation.

    The goal of this essay is to push back against the valorization of “tools” and “making” in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of that work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.

    Cartography in the sixteenth century, even as its tools and representational techniques were becoming more and more sophisticated, could never quite abandon the religious legacies of its past, nor did it want to. Roger Bacon in the thirteenth century had claimed that only with a thorough understanding of geography could one understand the Bible. Pauline Moffitt Watts, in her essay “The European Religious Worldview and Its Influence on Mapping,” concludes that many maps, including those by Eskrich and Ortelius, preserved a sense of providential and divine meaning even as they sought to narrate smaller, local areas:

    Although the messages these maps present are inescapably bound, their ultimate source—God—transcends and eclipses history. His eternity and omnipresence is signed but not constrained in the figurae, places, people, and events that ornament them. They offer fantastic, sometimes absurd vignettes and pastiches that nonetheless integrate the ephemera into a vision of providential history that maintained its power to make meaning well into the early modern era. (2007, 400)

    The way maps make meaning is contained not just in the technical expertise with which they are constructed but in the visual experiences they provide that “make meaning” for the viewer. By focusing too narrowly on the way maps are made, or on the geometric innovations that make their creation possible, the cartographic historian and theorist would miss the full effect of the work.

    Yet, the spiritual dimensions of mapmaking were not in opposition to technological expertise, and in many cases they went hand in hand. In his book Radical Arts, the Anglo-Dutch scholar Jan van Dorsten describes the spiritual motivations of sixteenth-century cosmographers disillusioned by academic theology’s failure to ease the trauma of the European Reformation: “Theology…as the traditional science of revelation had failed visibly to unite mankind in one indisputably ‘true’ perception of God’s plan and the properties of His creature. The new science of cosmography, its students seem to argue, will eventually achieve precisely that, thanks to its non-disputative method” (1970, 56–57). Some mapmakers of the sixteenth century in England, the Netherlands, and elsewhere—including Ortelius and others—imagined that the science and art of describing the created world, a text rivaling scripture in both revelatory potential and divine authorship, would create unity out of the disputation-prone culture of academic theology. Unlike theology, where thinkers are mapping an invisible world held in biblical scripture and apostolic tradition (as well as a millennium’s worth of commentary and exegesis), the liber naturae, the book of nature, is available to the eyes more directly, seemingly less prone to disputation.

    Cartographers were attempting to create an accurate imago mundi—surely a more tangible and grounded goal than trying to map divinity. Yet, as Patrick Gautier Dalché notes in his essay “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” the modernizing techniques of cartography after the “rediscovery” of Ptolemy’s work did not exactly follow a straight line of empirical progress:

    The modernization of the imago mundi and the work on modes of representation that developed during the early years of the sixteenth century should not be seen as either more or less successful attempts to integrate new information into existing geographic pictures. Nor should they be seen as steps toward a more “correct” representation, that is, toward conforming to our own notion of correct representation. They were exploratory games played with reality that took people in different directions…Ptolemy was not so much the source of a correct cartography as a stimulus to detailed consideration of an essential fact of cartographic representation: a map is a depiction based on a problematic, arbitrary, and malleable convention. (2007, 360)

    So even as the maps of this period may appear more “correct” to us, they are still engaged in experimentation to a degree that undermines any sense of the map as simply an empirical graphic representation of the earth. The “problematic, arbitrary, and malleable” conventions, used by the cartographer but observed and understood by the cartographic theorist and historian, reveal the sort of synergetic relationship between maker and observer, practitioner and theorist that allow an artifact to come into greater focus.

    Yet, cartography for much of its history turned away from seeing its work as culturally or even politically embedded. David Stoddart, in his history of geography, identifies Cook’s voyage to the Pacific in 1769 as the origin point of cartography’s transformation into an empirical science.[1] Stoddart places geography, from that point onward, within the realm of the natural sciences based on, as Derek Gregory observes, “three features of decisive significance for the formation of geography as a distinctly modern, avowedly ‘objective’ science: a concern for realism in description, for systematic classification in collection, and for the comparative method in explanation” (Gregory 1994, 19). What is lost, then, in this march toward empiricism is any sense of culturally embedded codes within the map. The map, like a lab report of scientific findings, is meant to represent what is “actually” there. This term “actually” will come back to haunt us when we turn to the digital humanities.

    Yet, in the long history of mapping, before and after this supposed empirical fulcrum, maps remain slippery and malleable objects that are used for a diverse range of purposes and that reflect the cultural imagination of their makers and observers. As maps took on the appearance of the empirical and began to sublimate the devotional and fantastical aspects they had once shown proudly, they were no less imprinted with cultural knowledge and biases. If anything, the veil of empiricism allowed the cultural, political, and imperial goals of mapmaking to be hidden.

    In his groundbreaking essay “Inventing America: The Culture of the Map,” William Boelhower argues precisely that maps had not simply represented America graphically but rather that America was invented by maps. “Accustomed to the success of scientific discourse and imbued with the Cartesian tradition,” he writes, “the sons of Columbus naturally presumed that their version of reality was the version” (1988, 212). While Europeans believed they were simply mapping what they saw according to empirical principles, they did not realize they were actually inventing America in their own discursive image. He elaborates: “The Map is America’s precognition; at its center is not geography in se but the eye of the cartographer. The fact requires new respect for the in-forming relation between the history of modern cartography and the history of the Euro-American’s being-in-the-new-world” (213). Empiricism, then, was empire. “Empirical” maps made the eye of the cartographer into the ideal “objective” viewer, producing a fictional way of seeing that reflected state power. Boelhower refers to the scale map as a kind of “panopticon” because of the “line’s achievement of an absolute and closed system no longer dependent on the local perspectivism of the image. With map in hand, the physical subject is theoretically everywhere and nowhere, truly a global operator” (222). What appears, then, simply to be the gathering, studying, and representation of data is, in fact, a system of discursive domination in which the cartographer asserts their worldview onto a site. As Boelhower puts it: “Never before had a nation-state sprung so rationally from a cartographic fiction, the Euclidean map imposing concrete form on a territory and a people” (223). America was a cartographic invention meant to appear empirically identical to how the cartographers made it look.

    To turn again to J.B. Harley’s 1989 bombshell, maps are always evidence of cultural norms and perspectives, even when they try their best to appear sparse and scientific. Referring to “plain scientific maps,” Harley claims that “such maps contain a dimension of ‘symbolic realism’ which is no less a statement of political authority or control than a coat-of-arms or a portrait of a queen placed at the head of an earlier decorative map.” Even “accuracy and austerity of design are now the new talismans of authority culminating in our own age with computer mapping” (60). To represent the world “is to appropriate it” and to “discipline” and “normalize” it (61). The more we move away from cultural markers for the mythical comfort of “empirical” data, the more we find we are creating dominating fictions. There is no representation of data that does not exist within the hierarchies of cultural codes and expectations.

    What this rather eclectic history of cartography reveals is that even when maps and mapmaking attempt to hide or move beyond their cultural and devotional roots, cultural, ethical, and political markers inevitably embed themselves in the map’s role as a broker of power. Maps sort data, but in so doing they create worldviews with real-world consequences. While some practitioners of mapmaking in the early modern period, such as the Familists who counted several cartographers among their membership, may have believed their cartographic work provided a more universal and less disputation-prone discursive focus than, say, philosophy or theology, they were producing power through their maps, appropriating and taming the world around them in ways only fully accessible to the reader, the historian, the viewer. Harley invites us to push back against a definition of cartographic studies that follows what cartographers themselves believe cartography must be. One can now, like the author of this essay, be a theorist and historian of cartographic culture without ever having made a map. Having one’s work exist outside the power-formation networks of cartographic technology provides a unique view into how maps make meaning and power out in the world. The main goal of this essay, as I turn to the digital humanities, is to encourage those interested in the digital turn to make room for those who study, observe, and critique, but do not make.[2]

    Though the digital turn in the humanities is often celebrated for its wider scope and its ability to allow scholars to interpret—or at least observe—data trends across many more books than one human could read in the research period of an academic project, I would argue that the main thrust of the fantasy of the digital turn can be understood through its preoccupation with access and a view of its labor as fundamentally different from the labor of traditional academic discourse. A radical hybridity is celebrated. Rather than just read books and argue about the contents, the digital humanist is able to draw from a wide variety of sources and expanded data. Michael Witmore, in a recent essay published in New Literary History, celebrates this age of hybridity: “If we speak of hybridization as the point where constraints cease to be either intellectual or physical, where changes in the earth’s mean temperature follow just as inevitably from the ‘political choices of human beings’ as they do from the ‘laws of nature,’ we get a sense of how rich and productive the modernist divide has been. Hybrids have proliferated. Indeed, they seem inexhaustible” (355). Witmore sees digital humanities as existing within this hybridity: “The Latourian theory of hybrids provides a useful starting point for thinking about a field of inquiry in which interpretive claims are supported by evidence obtained via the exhaustive, enumerative resources of computing” (355). The emphasis on the “exhaustive” and “enumerative” resources of computing implies, even if this were not Witmore’s intention, that computing opens a depth of evidence not available to the non-hybrid, non-digitally enabled humanist.

    Indeed, in certain corners of DH, one often finds a suspicious eye cast on the value of traditional exegetical practices pursued without any digital engagement. In The Digital Humanist: A Critical Inquiry by Teresa Numerico, Domenico Fiormonte, and Francesco Tomasi, the authors “call on humanists to acquire the skills to become digital humanists,” elaborating: “Humanists must complete a paso doble, a double step: to rediscover the roots of their own discipline and to consider the changes necessary for its renewal. The start of this process is the realization that humanists have indeed played a role in the history of informatics” (2015, x). Numerico, Fiormonte, and Tomasi offer a vision of the humanities as in need of “renewal” rather than under attack from external forces. The suggestion is that the humanities need to rediscover their roots while at the same time taking on the “tools necessary for [their] renewal,” tools related to their “role in the history of informatics” and computing. The humanities are thus shown to be caught in a double bind: they have forgotten their roots, and they are unable to innovate without considering the digital.

    To offer a political aside: while Numerico, Fiormonte, and Tomasi offer a compelling and necessary history of the humanistic roots of computing, their argument is well in line with right-leaning attacks on the humanities. In their view, the humanities have fallen away from their first purpose, their roots. While the authors of the volume see these roots as connected to the early years of modern computer science, they could just as easily, especially given what early computational humanities looked like, be urging a return to philology and to the world of concordances and indexing that were so important to early and mid-twentieth-century literary studies. They might also gesture instead at the deep history of political and philosophical thought out of which the modern university was born, and which was considered fundamental to the very project of university education until only very recently. Barring a return to these roots, the least the humanities can do to survive is to renew themselves through a connection to the digital and to the site of modern work: the computer terminal.

    Of course, what scholarly work is done outside the computer terminal? Journals and, increasingly, whole university press catalogs are being digitized and sold to university libraries on a subscription basis. Scholars read these materials and then type their own words into word-processing programs on machines (even if, like the recent Freewrite machine released by Astrohaus, the machine attempts to appear as little like a computer as possible) and then, in almost all cases, email their work to editors, who edit it digitally and publish it either through digitally enabled print publishing or directly online. So why aren’t humanists of all sorts already considered connected to the digital?

    The answer is complicated and, like so many things in DH, depends on which particular theorist or practitioner you ask. Matthew Kirschenbaum writes about how one knows one is a digital humanist:

    You are a digital humanist if you are listened to by those who are already listened to as digital humanists, and they themselves got to be digital humanists by being listened to by others. Jobs, grant funding, fellowships, publishing contracts, speaking invitations—these things do not make one a digital humanist, though they clearly have a material impact on the circumstances of the work one does to get listened to. Put more plainly, if my university hires me as a digital humanist and if I receive a federal grant (say) to do such a thing that is described as digital humanities and if I am then rewarded by my department with promotion for having done it (not least because outside evaluators whom my department is enlisting to listen to as digital humanists have attested to its value to the digital humanities), then, well, yes, I am a digital humanist. Can you be a digital humanist without doing those things? Yes, if you want to be, though you may find yourself being listened to less unless and until you do some thing that is sufficiently noteworthy that reasonable people who themselves do similar things must account for your work, your thing, as a part of the progression of a shared field of interest. (2014, 55)

    Kirschenbaum defines the digital humanist as, mostly, someone who does something that earns the recognition of other digital humanists. He argues that this is not particularly different from the traditional humanities, in which publications, grants, jobs, etc. are the standard measures of who is or is not a scholar. Yet one wonders, especially in the age of the complete collapse of the humanities job market, whether such institutional distinctions are either ethical or accurate. What would we call someone with a Ph.D. (or even without one) who spends their days reading books, reading scholarly articles, and writing in their own room about the Victorian verse monologue or the early Tudor dramatic interludes? If no one reads a scholar, are they still a scholar? For the creative arts, we seem to have answered this question: we believe that the work of a poet, artist, or philosopher matters much more than their institutional recognition or memberships during the era of the work’s production. Also, the need to be “listened to” is particularly vexed and reflects some of the political critiques that are often launched at DH. Who is most listened to in our society? White, cisgendered, heterosexual men. In the age of Trump, we are especially attuned to the fact that whom we choose to listen to is not always the most deserving or talented voice, but the one reflecting existing narratives of racial and economic distribution.

    Beyond this, the combined requirement of institutional recognition and economic investment (a salary from a university, a prestigious grant paid out) ties the work of the humanist to institutional rewards. One can be a poet, scholar, or thinker in one’s own house, but one cannot be an investment banker, a lawyer, or a police officer by self-declaration. The fluid nature of who can be a philosopher, thinker, poet, or scholar has always meant that the work, not the institutional affiliation, of a writer/maker matters. Though DH is a diverse body of practitioners doing all sorts of work, it is often framed, sometimes only implicitly, as a return to “work” over “theory.” Kirschenbaum, for instance, defending DH against accusations that it is against the traditional work of the humanities, writes: “Digital humanists don’t want to extinguish reading and theory and interpretation and cultural criticism. Digital humanists want to do their work… they want professional recognition and stability, whether as contingent labor, ladder faculty, graduate students, or in ‘alt-ac’ settings” (56). They essentially want the same things any other scholar does. Yet, while digital humanists are on the one hand defined by their ability to be listened to and to have professional recognition and stability, they are on the other hand still in search of that recognition and stability, and eager to reshape humanistic work toward a more technological model.

    This leads to a question that is not always explored closely enough in discussions of the digital humanities in higher education. Though scholars are rightly building bridges between STEM and the humanities (rightly pushing for STEAM over STEM), there are major institutional differences between how the humanities and the sciences have traditionally functioned. Scientific research largely happens because of institutional investment of some kind, whether from governmental, NGO, or corporate grants. This is why the funding sources of any given study are particularly important to follow. In the humanities, of course, grants also exist, and they are a marker of career prestige. No one could doubt the benefit of time spent in a far-away archive, or at home writing instead of teaching thanks to a dissertation-completion grant. Grants, in other words, boost careers, but they are not necessary.[3] Very successful humanists have depended on library resources alone to produce influential work. In many cases, access to a library, a computer, and a desk is all one needs, and the digitization of many archives (a phenomenon not free from political and ethical complications) has expanded access to archival materials once available only to students of wealthy institutions with deep special-collections budgets or to those with grants to travel and lodge far away for their research.

    All this is to say that a particular valorization of the sciences is risky business for the humanities. Kirschenbaum recommends that since “digital humanities…is sometimes said to suffer from Physics envy,” the field should embrace this label and turn to “a singularly powerful intellectual precedent for examining in close (yes, microscopic) detail the material conditions of knowledge production in scientific settings or configurations. Let us read citation networks and publication venues. Let us examine the usage patterns around particular tools. Let us treat the recensio of data sets” (60). Longing for the humanities to resemble the sciences is nothing new. Longing for data sets instead of individual texts, for “particular tools” rather than a philosophical problem or trend, can sometimes be a helpful corrective to more Platonic searches for the “spirit” of a work or movement. And yet there are risks to this approach, not least because the works themselves, that is, the objects of inquiry, are treated in such general terms that they become essentially invisible. One can miss the tree for the forest and know more about the number of citations of Dante’s Commedia than about the original text, or the spirit in which those citations are made. Surely there is room for both, except when, because of shrinking hiring practices, there isn’t.

    In fact, the economic politics of the digital humanities has long been a source of at times fiery debate. Daniel Allington, Sarah Brouillette, and David Golumbia, in “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” argue that the digital humanities have long been defined by their preference for lab- and project-based sources of knowledge over traditional humanistic inquiry:

    What Digital Humanities is not about, despite its explicit claims, is the use of digital or quantitative methodologies to answer research questions in the humanities. It is, instead, about the promotion of project-based learning and lab-based research over reading and writing, the rebranding of insecure campus employment as an empowering “alt-ac” career choice, and the redefinition of technical expertise as a form (indeed, the superior form) of humanistic knowledge. (Allington, Brouillette and Golumbia 2016)

    This last point, the valorization of “technical expertise,” is, I would argue, profoundly difficult to perform in a way that doesn’t implicitly devalue the classic toolbox of humanistic inquiry. The motto “More hack, less yack”—a favorite of the THATCamps, collaborative “un-conferences”—encapsulates this idea. Too much unfettered talking could lead to discord, to ambiguity, and to strife. To hack, on the other hand, is understood as something tangible and something implicitly more worthwhile than the production of discourse outside of particular projects and digital labs. Yet Natalia Cecire has noted, “You show up at a THATCamp and suddenly folks are talking about separating content and form as if that were, like, a real thing you could do. It makes the head spin” (Cecire 2011). Context, with all its ambiguities, once the bedrock of humanistic inquiry, is being sidestepped for massive data analysis that, by the very nature of distant reading, cannot account for context to a degree that would satisfy, say, the many Renaissance scholars who trained me. Cecire’s argument is a valuable one. In her post, she does not argue that we should necessarily follow a strategy of “no hack,” only that “we should probably get over the aversion to ‘yack.’” As she notes, “[yack] doesn’t have to replace ‘hack’; the two are not antithetical.”

    As DH continues to define itself, one can detect a sense that digital humanists’ focus on individual pieces or series of data, as well as their work in coding, embeds them in more empirical conversations that do not float up to the level of speculation so emblematic of what used to be called high theory. This is, for many DH practitioners, a source of great pride. Kirschenbaum ends his essay with the following observation: “there is one thing that digital humanities ineluctably is: digital humanities is work, somebody’s work, somewhere, some thing, always. We know how to talk about work. So let’s talk about this work, in action, this actually existing work” (61). The insistence on “some thing” and “this actually existing work” implies that there is work not centered on a thing, or work that does not actually exist, and that the move toward more concrete objects of inquiry, toward more empirical subjects, is a defining characteristic of digital humanities.

    This, among other issues, has made many respond to the digital humanities as if they were cooperating with and participating in the corporatized ideologies of Silicon Valley “tech culture.” Whitney Trettien, in an insightful blog post, claims that “Humanities scholars who engage with technology in non-trivial ways have done a poor job responding to such criticism” and accuses those who criticize digital humanities of “continuing to reify a diverse set of practices as a homogeneous whole.” Let me be clear: I am not claiming that Kirschenbaum or Trettien or any other scholar writing in a theoretical mode about digital humanities is representative of an entire field. But their writing is part of the discursive community, and when those of us whose work is enabled by digital resources, but who do not build digital tools, see our engagement with the digital described as “trivial” and see our work contrasted, implicitly but still clearly, with “this actually existing work,” it is hard not to feel that the humanist working on texts with digital tools (but not about the digital tools or about data derived from digital modeling) is being somehow slighted.

    For instance, in a short essay, “Why Digital Humanities Is ‘Nice,’” Tom Scheinfeldt claims: “One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are ‘collegiality,’ ‘openness,’ and ‘collaboration’” (2012, 1). I have to admit I have not noticed what Scheinfeldt claims people often notice (perhaps I have spent too much time on Twitter watching digital humanities debates unfurl in less than “nice” ways), but the claim, even as a discursive and defining fiction around DH, helps to explain one thread of the digital humanities’ project of self-definition: we are kind because what we work on is verifiable fact, not complicated and speculative philosophy or theory. Scheinfeldt says as much as he concludes his essay:

    Digital humanities is nice because, as I have described in earlier posts, we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically, or the practical needs of our projects require us simply to pick one and move on. Moreover, as Sean Takats, my colleague at the Roy Rosenzweig Center for History and New Media (CHNM), pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short, and digital humanities stays nice. (2)

    The most obvious question one is left with is: but what is the code doing? Where are the humanities in this vision of the digital? What truly discursive and interpretative work could produce fundamental disagreements that could be resolved simply by verifying the code in a community setting? Also, the celebration of how enforceable community norms are when an argument goes “too far afield” presents a troubling vision of a discursive community in which the appearance of agreement, enforceable through “empirical” testing, matters more than freedom of debate. In our current political climate, one wonders whether such empirically-minded groupthink adequately makes room for more vulnerable, and not quite as loud, voices. When the goal is a functioning website or program, Scheinfeldt may be quite right; but in discursive work in the humanities, citing text, for instance, rarely quells disagreement—it only makes clearer where the battle lines are drawn. This is particularly ironic given that the digital humanities, understood as a giant discursive, never-quite-adequate term for the field, is still defining itself, as it has been for decades, in essay after essay asking just what DH is.

    I am echoing here some of the arguments offered by Adeline Koh in her essay “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” In this quite important intervention, Koh argues that DH is centered on two linked characteristics, niceness and technological expertise. Though one might think these requirements are disparate, Koh reveals how they are joined in the formation of a DH social contract:

    In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge—which appears to happen in [prominent DH scholar Stephen] Ramsay’s formulation of DH 2—is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility. In other words, these two rules form the basis of an attempt to enforce a digital humanities social contract: necessary conditions (technical knowledge) that impose civic responsibilities (civility and niceness). (100)

    Koh believes that what is necessary to weaken the link between the DH social contract and the tenets of liberalism is an expanded genealogy of the digital humanities; she urges DH to consider its roots beyond humanities computing.[4]

    To demand that one work with technical expertise on “this actually existing work”—whatever that work may end up being—is to state rather clearly that there are guidelines fencing in the digital humanities. As in the history of cartographic studies, the opinions of the makers, attentive to their data sets, have been allowed to determine what the digital humanities are (or what DH is). Like the moment when J.B. Harley challenged historians and theorists of cartography to ignore what the cartographers say and to explore maps and mapmaking outside of the tools needed to make a map, perhaps DH is ready to enter a new phase, one in which it begins its own renewal by no longer valorizing tools, code, and technology, and by letting the observers, the consumers, the fantasists, and the historians of power and oppression in (without their laptops). Indeed, what DH can learn from the history of cartography is that what DH is, in all its many forms, is seldom (just) what digital humanists say it is.

    _____

    Tim Duffy is a scholar of Renaissance literature, poetics, and spatial philosophy.

    Back to the essay

    _____

    Notes

    [1] See David Stoddart, “Geography—a European Science” in On Geography and Its History, pp. 28-40. For a discussion of Stoddart’s thinking, see Derek Gregory, Geographical Imaginations, pp. 16-21.

    [2] Obviously, critics and writers make things, but their critique exists outside of the production of the artifacts they study. Cartographic theorists, as this article argues, need not be cartographers themselves any more than a critic or theorist of the digital need be a programmer or creator of digital objects.

    [3] For more on the political problems of dependence on grants, see Waltzer (2012): “One of those conditions is the dependence of the digital humanities upon grants. While the increase in funding available to digital humanities projects is welcome and has led to many innovative projects, an overdependence on grants can shape a field in a particular way. Grants in the humanities last a short period of time, which make them unlikely to fund the long-term positions that are needed to mount any kind of sustained challenge to current employment practices in the humanities. They are competitive, which can lead to skewed reporting on process and results, and reward polish, which often favors the experienced over the novice. They are external, which can force the orientation of the organizations that compete for them outward rather than toward the structure of the local institution and creates the pressure to always be producing” (340-341).

    [4] In her reading of how digital humanities deploys niceness, Koh writes “In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge…is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility” (100).

    _____

    Works Cited

    • Allington, Daniel, Sarah Brouillette, and David Golumbia. 2016. “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books.
    • Boelhower, William. 1988. “Inventing America: The Culture of the Map” in Revue française d’études américaines 36. 211-224.
    • Cecire, Natalia. 2011. “When DH Was in Vogue; or, THATCamp Theory.”
    • Dalché, Patrick Gautier. 2007. “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century)” in Cartography in the European Renaissance, Volume 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 285-364.
    • Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. New York: Punctum Books.
    • Gregory, Derek. 1994. Geographical Imaginations. Cambridge: Blackwell.
    • Harley, J.B. 2011. “Deconstructing the Map” in The Map Reader: Theories of Mapping Practice and Cartographic Representation, First Edition, edited by Martin Dodge, Rob Kitchin and Chris Perkins. New York: John Wiley & Sons, Ltd. 56-64.
    • Jacob, Christian. 2005. The Sovereign Map. Translated by Tom Conley. Chicago: University of Chicago Press.
    • Kirschenbaum, Matthew. 2014. “What is ‘Digital Humanities,’ and Why Are They Saying Such Terrible Things about It?” Differences 25:1. 46-63.
    • Koh, Adeline. 2014. “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” Differences 25:1. 93-106.
    • Scheinfeldt, Tom. 2012. “Why Digital Humanities is ‘Nice.’” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Trettien, Whitney. 2016. “Creative Destruction/‘Digital Humanities.’” Medium (Aug 24).
    • Watts, Pauline Moffitt. 2007. “The European Religious Worldview and Its Influence on Mapping” in The History of Cartography: Cartography in the European Renaissance, Vol. 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 382-400.
    • Waltzer, Luke. 2012. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Witmore, Michael. 2016. “Latour, the Digital Humanities, and the Divided Kingdom of Knowledge.” New Literary History 47:2-3. 353-375.

     

  • Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age


    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)––I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication implies inevitably the existence of a society which is, in his eyes, itself an unnatural institution. Traces of Flusser’s life-long attempt to think through the full philosophical implications of European nihilism can be found throughout his work, and this engagement is readily evident in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)––enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth-century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses––then cannot one be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line to Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to 2013, the narrative’s present. The novel’s Silicon Valley setting, its nominal and characterological doubling, and its structural coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen––or all Joshua Cohens––is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments. In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual––or every individual, it does not really matter––is nothing more, and nothing less, than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological, reading subject, and it suggests too that literature––or even just language in a general sense––and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet incisive and stylistically ironic interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and to a lesser extent, subjectivity, appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitute some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues similarly that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories––or simply the records of our data use (and its subsequent use of us)––comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and threefold of its Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less than the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being––it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

    Ontological thinking––which today means ontological accounting, or finding ways to account for the ontologically actuarial––is today’s philosophical equivalent of best practices for data management, except that there is no difference between one’s data and one’s Self. Indeed, any ontological difference that might once have stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent of the Self emerges out of what Jarzombek calls an “animated ontology.”

    In this “animated ontology,” “subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms which merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

    Digital Stockholm Syndrome asserts that what exists in today’s ontological systems––systems both philosophical and computational––is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).

    Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    One wonders if the incessant cultural and political reminders regarding the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. Why is it humanities scholars who are, in some cases, most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human”? And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that [its] whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

    Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which we locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/). As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “‘Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

    While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are going to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.

    Back to the essay

    _____

    Notes

    [1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Encapsulating Flusser’s spirit of healthy skepticism toward technical apparatuses, the situation that both Flusser and Hu fear is one in which “the technology has produced the means of its own interpretation” (xixx).

    [2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “‘New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, the speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not about what is or is not an object, but rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

    [4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein Smith’s recent essay “Scientizing the Humanities: Shifts, Negotiations, Collisions,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

    [5] In “Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

    [6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? [….] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

    [7] Wendy Hui Kyong Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension of confession, of self-curation as self-exposition, and of online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

    I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “autotheory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be? (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010) and Taipei (2013), Natasha Stagg’s Surveys (2016), and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ acts of social and political transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered a theory of the self, or a self-theorization, or perhaps even the idea that personal experience is itself a kind of theory.
I wonder, though, how its meaning would change if the prefix “auto” were understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

    [10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-self whose authenticity is produced when it is fact-checked against a biographical database (3)—self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”–its representation in and as data–comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

    [11] Flusser is again instructive on this point, albeit in his own idiosyncratic way. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission not only that the category of “the human” is unnatural, but that the study of such an unnatural thing is itself unnatural as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, the medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

    [13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” In this section the Floating Sheep Collective raises crucially important questions regarding ownership of data with regard to marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars–the Self’s data points help point the Self towards its own ontological location in and as data–then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

    [14] “Mind/paranoia: they are the same word!” (Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
    • Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
    • Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
    • Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
    • Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
    • Hui, Yuk. “Algorithmic Catastrophe—The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
    • Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
    • Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
    • Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014): n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.
  • Data and Desire in Academic Life


    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier told me this news, and before I really had time to feel any particular way about it, my leg vibrated. I had received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the offer of tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data”: how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was an all-too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much smaller than the amount my groceries cost? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a feat of sophisticated real-time, data-enabled prescience but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your FitBit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.
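    The decline logic I am guessing at here can be sketched in a few lines of code (a toy illustration with invented thresholds and field names, not any bank’s actual system):

    ```python
    # Toy sketch of a blunt, rule-based fraud flag: hold a transaction only
    # when it is BOTH far from the cardholder's home AND above a dollar
    # threshold. The thresholds below are hypothetical.

    def flag_transaction(distance_from_home_miles: float, amount_usd: float,
                         max_distance: float = 500, max_amount: float = 75.0) -> bool:
        """Return True if the transaction should be held for confirmation."""
        return distance_from_home_miles > max_distance and amount_usd > max_amount

    # A $100.40 grocery run roughly 2,900 miles from home trips both rules:
    print(flag_transaction(2900, 100.40))  # True
    # The same purchase near home sails through:
    print(flag_transaction(5, 100.40))     # False
    ```

    Two comparisons and a conjunction: no learning, no purchase history, which is why such a rule fires on an innocent grocery run while ignoring the plane ticket bought earlier on the same card.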

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.
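    For readers unfamiliar with the tool, the core computation behind an n-gram viewer is simple: for each year, divide a word’s occurrences by the total number of words published that year. A minimal sketch, using a tiny invented corpus in place of Google’s scanned books:

    ```python
    from collections import Counter

    # Tiny invented "corpus": year -> list of words published that year.
    corpus = {
        1900: "the whale the sea the ship".split(),
        1950: "the atom the bomb the code the data".split(),
        2000: "data data networks data the cloud".split(),
    }

    def relative_frequency(word, corpus):
        """Map each year to count(word) / total words for that year."""
        series = {}
        for year, words in sorted(corpus.items()):
            counts = Counter(words)
            series[year] = counts[word] / len(words)
        return series

    print(relative_frequency("data", corpus))
    # {1900: 0.0, 1950: 0.125, 2000: 0.5}
    ```

    Plotting such a series over time is essentially all the viewer does, which is part of what makes it neat if one-dimensional: the unit of analysis is the decontextualized word count, never the book.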

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state-run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of updating). There is still the dazzlement at the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors, the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire: about who gets access to this kind of data, the ethics of working with companies that have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED Talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is not directly answered but is strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people in Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social-scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why, and what we know of the world. These methods are inspiring new ideas about generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the FitBit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state economic retrenchment on education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand-wringing over the politics of working with companies like Facebook, Twitter, and Google. If there is trepidation in the N-grams project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to a hopeful attempt to coin a buzzy new word about the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather the renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile this separation has been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture and everyday life; [1] rather, it signals the acceptance of the idea of culture as a transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of keeping eyeballs engaged with advertisements. The N-gram Viewer does not make Google money, but it perhaps increases public support for the company’s larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars to defend in court.

    This vision of the humanities is transactional in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the N-gram viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry, which seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the N-gram viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection: a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future to attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives the company the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.


    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics” and “late capitalism” more often than not are used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • How We Think About Technology (Without Thinking About Politics)


    a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
    by R. Joshua Scannell

    ~

    In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than offering a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.

    Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology à la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.

    The debate about the death and/or future of the humanities is well worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta-test cycles at quicker rates, which demand that humans behave affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this is a failure of those disciplines to imagine methods in step with technological change. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement towards what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, non-consciously.

    Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.

    This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.

    But while Hayles’s critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues:

    It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)

    With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must explicitly be political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.

    Universities across the country are collapsing due to lack of funding, students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver economic promises, and “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’s central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.

    This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she only speaks to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.

    While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.

    Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated, but are compelled by financial realities and debt, to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations – then it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full time jobs while taking on full time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?

    In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth-columnist than emancipatory.

    This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively: in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms, narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.

    Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury 2013). Hayles convincingly takes it to task for being reductive and instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographical information services and global positioning services, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.

    As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databanks and object-oriented databanks. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It’s unclear how this leap holds without an attendant sociopolitical critique. But it is impossible to talk about the cultural logic of computation without talking about the social context in which this computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the data bank. But it is not an immediately logical leap to then claim that therefore databases as a cultural form are spatial and not temporal.

    Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.

    Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true in the sense of setting algorithms to work sorting, sifting and analyzing truly incomprehensible amounts of data and materially true in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high-frequency trading, which practice “co-location,” locating data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.

    Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.

    All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
    _____

    R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    Back to the essay
    _____

    Patricia T. Clough. 2008. “The Affective Turn.” Theory, Culture & Society 25(1): 1-22.

    N. Katherine Hayles. 2002. Writing Machines. Cambridge: MIT Press

    N. Katherine Hayles. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press

    Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press

    Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.

    Lev Manovich. 2013. Software Takes Command. London: Bloomsbury.

    Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press

    Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.

    Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge

    Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Palo Alto: Stanford University Press

    Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265.


  • Program and Be Programmed

    Program and Be Programmed

    a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

    Yet, one of the pleasures of contemporary computer usage is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code which make software function; the code is what gives the machine its directions—what ensures that the tapping of the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: programming is itself governed by “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer is still contingent upon submitting to the control of other elements of the computer.

    Contemporary computers, and our many computer-esque devices (such as smart phones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she goes to some lengths to introduce the reader to the history of programming. For it is against the historical backdrop of military research, during the Second World War, that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside into the vital role that women played in programming history, analyzing the early history of computing demonstrates how structured programming emerged as a means of cutting down on repetitive work, an approach that “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows for more and more processes to be left to the machine, and thus processes and codes become hidden from view even as future programmers are taught to conform to the demands that will allow for new programs to successfully make use of these early programs. Therefore the processes that were once a result of expertise come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphic user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

    Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow for computers to access vast informational troves. In so doing, the technologically-empowered user has been conditioned to expect a programmable world akin to the programmed devices they use to navigate that world—it has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

    This logic of programmability may be linked to inorganic machines, but Chun also demonstrates the ways in which this logic has been applied to the organic world as well. In truth, the idea that the organic can be programmed predates the computer; as Chun explains “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides for a fascinating historic comparison given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently: “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer functions). Chun elucidates the ways in which the shift from analog to digital has permitted a wider number of users to enjoy the benefits of computers while this shift has likewise made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present and which do or do not allow it to be presented.

    When people make use of computers today they find themselves looking—quite literally—at what the software presents to them, yet in allowing this act of seeing the programming also has determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

    Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. While all readers of Programmed Visions will come to it with their own background and knowledge of coding, programming, software, and so forth—the simple truth is that Chun’s point (that many people do not understand software sufficiently) may make many a reader feel somewhat taken aback. For most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than that which is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in our thinking that machines ever really obeyed us.

    It will be easy for many readers to see themselves as the user that Chun describes—as someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role wherein they too must comply with the strictures of the code; this relocation presents an important argument for considerations of such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

    The weaving of historic details regarding computers during World War II and eugenics provides an excellent and challenging atmosphere against which Chun’s arguments regarding programmability can grow. Chun lucidly describes the embodiment and materiality of information and obsolescence that serve as major challenges confronting those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative as it provides a rich way of describing the “there but not there” of information, while simultaneously playing upon the slight horror and uneasiness that seems to be lurking below the surface in the confrontation with information.

    As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems like it could in fact be the introduction for another book. It leaves the reader with much to contemplate—though it is the fact that this thread was not truly “largely invisible” that makes the reader upon reaching the epilogue wish that the book could have dealt with that matter at greater length. Yet, these are fairly minor concerns—that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

    Programmed Visions: Software and Memory is an alternately troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with a fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment during which the “?” is hit on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.

    Back to the essay

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.

    3. Jones, 13.

    4. See Gibson.

    5. Jones, 5.

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.

    7. Jones, 5. See also Montfort.

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The Digital Turn

    The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which one of the major literary studies journals first dedicated part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and that which comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow” or most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my being one of the first English professors in the country to be hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and despite my being a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the ‘Dark Side of the Digital Humanities’ at the 2013 MLA Annual Convention, and several at recent American Studies Association Conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were primary; cultural representation and its relation to (or lack of relation to) identity politics was of primary concern; close reading of texts both likely and unlikely figured prominently; the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because they are both so confrontational and so antithetical to the remit of the literary humanities from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities as represented by so august an institution as boundary 2 should turn its attention there not only validates digital enthusiasts’ sense of the medium’s importance but should also provoke in them a responsibility toward the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia

    Browse All Digital Studies Reviews