boundary 2

  • Tim Duffy — Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn

    Christian Jacob, in The Sovereign Map, describes maps as enablers of fantasy: “Maps and globes allow us to live a voyage reduced to the gaze, stripped of the ups and downs and chance occurrences, a voyage without the narrative, without pitfalls, without even the departure” (2005). Consumers and theorists of maps, more than cartographers themselves, are especially set up to enjoy the “voyage reduced to the gaze” that cartographic artifacts (including texts) are able to provide. An outside view, distant from the production of the artifact, activates the epistemological potential of the artifact in a way that producing the same artifact cannot.

    This dynamic is found at the conceptual level of interpreting cartography as a discipline as well. J.B. Harley, in his famous essay “Deconstructing the Map,” writes that:

    a major roadblock to our understanding is that we still accept uncritically the broad consensus, with relatively few dissenting voices, of what cartographers tell us maps are supposed to be. In particular, we often tend to work from the premise that mappers engage in an unquestionably “scientific” or “objective” form of knowledge creation… It is better for us to begin from the premise that cartography is seldom what cartographers say it is (Harley 1989, 57).

    Harley urges an interpretation of maps outside the purview and authority of the map’s creator, just as a literary scholar would insist on the critic’s ability to understand the text beyond the authority of what the authors say about their texts. There can be, in other words, a power in having distance from the act of making. There is clarity that comes from the role of the thinker outside of the process of creation.

    The goal of this essay is to push back against the valorization of “tools” and “making” in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.

    Cartography in the sixteenth century, even as its tools and representational techniques were becoming more and more sophisticated, could never quite abandon the religious legacies of its past, nor did it want to. Roger Bacon in the thirteenth century had claimed that only with a thorough understanding of geography could one understand the Bible. Pauline Moffitt Watts, in her essay “The European Religious Worldview and Its Influence on Mapping” concludes that many maps, including those by Eskrich and Ortelius, preserved a sense of providential and divine meaning even as they sought to narrate smaller, local areas:

    Although the messages these maps present are inescapably bound, their ultimate source—God—transcends and eclipses history. His eternity and omnipresence is signed but not constrained in the figurae, places, people, and events that ornament them. They offer fantastic, sometimes absurd vignettes and pastiches that nonetheless integrate the ephemera into a vision of providential history that maintained its power to make meaning well into the early modern era. (2007, 400)

    The way maps make meaning is contained not just in the technical expertise of the way the maps are constructed but in the visual experiences they provide that “make meaning” for the viewer. By over-prioritizing an emphasis on the way maps are made or on the geometric innovations that make their creation possible, the cartographic historian and theorist would miss the full effect of the work.

    Yet, the spiritual dimensions of mapmaking were not in opposition to technological expertise, and in many cases the two went hand in hand. In his book Radical Arts, the Anglo-Dutch scholar Jan van Dorsten describes the spiritual motivations of sixteenth-century cosmographers disappointed by academic theology’s inability to ease the trauma of the European Reformation: “Theology…as the traditional science of revelation had failed visibly to unite mankind in one indisputably ‘true’ perception of God’s plan and the properties of His creature. The new science of cosmography, its students seem to argue, will eventually achieve precisely that, thanks to its non-disputative method” (1970, 56-7). Some mapmakers of the sixteenth century in England, the Netherlands, and elsewhere—including Ortelius and others—imagined that the science and art of describing the created world, a text rivaling scripture in both revelatory potential and divine authorship, would create unity out of the disputation-prone culture of academic theology. Unlike theology, where thinkers are mapping an invisible world held in biblical scripture and apostolic tradition (as well as a millennium’s worth of commentary and exegesis), the liber naturae, the book of nature, is available to the eyes more directly, seemingly less prone to disputation.

    Cartographers were attempting to create an accurate imago mundi—surely that was a more tangible and grounded goal than trying to map divinity. Yet, as Patrick Gautier Dalché notes in his essay “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” the modernizing techniques of cartography after the “rediscovery” of Ptolemy’s work did not exactly follow a straight line of empirical progress:

    The modernization of the imago mundi and the work on modes of representation that developed during the early years of the sixteenth century should not be seen as either more or less successful attempts to integrate new information into existing geographic pictures. Nor should they be seen as steps toward a more “correct” representation, that is, toward conforming to our own notion of correct representation. They were exploratory games played with reality that took people in different directions…Ptolemy was not so much the source of a correct cartography as a stimulus to detailed consideration of an essential fact of cartographic representation: a map is a depiction based on a problematic, arbitrary, and malleable convention. (2007, 360)

    So even as the maps of this period may appear more “correct” to us, they are still engaged in experimentation to a degree that undermines any sense of the map as simply an empirical graphic representation of the earth. The “problematic, arbitrary, and malleable” conventions, used by the cartographer but observed and understood by the cartographic theorist and historian, reveal the sort of synergetic relationship between maker and observer, practitioner and theorist, that allows an artifact to come into greater focus.

    Yet, cartography for much of its history turned away from seeing its work as culturally or even politically embedded. David Stoddart, in his history of geography, labels Cook’s voyage to the Pacific in 1769 as the origin point of cartography’s transformation into an empirical science.[1] Stoddart places geography, from that point onward, within the realm of the natural sciences based on, as Derek Gregory observes, “three features of decisive significance for the formation of geography as a distinctly modern, avowedly ‘objective’ science: a concern for realism in description, for systematic classification in collection, and for the comparative method in explanation” (Gregory 1994, 19). What is gone, then, in this march toward empiricism is any sense of culturally embedded codes within the map. The map, like a lab report of scientific findings, is meant to represent what is “actually” there. This term “actually” will come back to haunt us when we turn to the digital humanities.

    Yet, in the long history of mapping, before and after this supposed empirical fulcrum, maps remain slippery and malleable objects that are used for a diverse range of purposes and that reflect the cultural imagination of their makers and observers. As maps took on the appearance of the empirical and began to sublimate the devotional and fantastical aspects they had once shown proudly, they were no less imprinted with cultural knowledge and biases. If anything, the veil of empiricism allowed the cultural, political, and imperial goals of mapmaking to be hidden.

    In his groundbreaking essay “Inventing America: The Culture of the Map,” William Boelhower argues precisely that maps did not simply represent America graphically, but rather that America was invented by maps. “Accustomed to the success of scientific discourse and imbued with the Cartesian tradition,” he writes, “the sons of Columbus naturally presumed that their version of reality was the version” (1988, 212). While Europeans believed they were simply mapping what they saw according to empirical principles, they did not realize they were actually inventing America in their own discursive image. He elaborates: “The Map is America’s precognition; at its center is not geography in se but the eye of the cartographer. The fact requires new respect for the in-forming relation between the history of modern cartography and the history of the Euro-American’s being-in-the-new-world” (213). Empiricism, then, was empire. “Empirical” maps were making the eye of the cartographer into the ideal “objective” viewer, producing a fictional way of seeing that reflected state power. Boelhower refers to the scale map as a kind of “panopticon” because of the “line’s achievement of an absolute and closed system no longer dependent on the local perspectivism of the image. With map in hand, the physical subject is theoretically everywhere and nowhere, truly a global operator” (222). What appears, then, simply to be the gathering, studying, and representation of data is, in fact, a system of discursive domination in which the cartographer asserts their worldview onto a site. As Boelhower puts it: “Never before had a nation-state sprung so rationally from a cartographic fiction, the Euclidean map imposing concrete form on a territory and a people” (223). America was a cartographic invention made to appear empirically identical to the image its cartographers had created.

    To turn again to J.B. Harley’s 1989 bombshell, maps are always evidence of cultural norms and perspectives, even when they try their best to appear sparse and scientific. Referring to “plain scientific maps,” Harley claims that “such maps contain a dimension of ‘symbolic realism’ which is no less a statement of political authority or control than a coat-of-arms or a portrait of a queen placed at the head of an earlier decorative map.” Even “accuracy and austerity of design are now the new talismans of authority culminating in our own age with computer mapping” (60). To represent the world “is to appropriate it” and to “discipline” and “normalize” it (61). The more we move away from cultural markers for the mythical comfort of “empirical” data, the more we find we are creating dominating fictions. There is no representation of data that does not exist within the hierarchies of cultural codes and expectations.

    What this rather eclectic history of cartography reveals is that even when maps and mapmaking attempt to hide or move beyond their cultural and devotional roots, cultural, ethical, and political markers inevitably embed themselves in the map’s role as a broker of power. Maps sort data, but in so doing they create worldviews with real-world consequences. While some practitioners of mapmaking in the early modern period, such as those Familists who counted several cartographers among their membership, may have thought their cartographic work provided a more universal and less disputation-prone discursive focus than, say, philosophy or theology, they were in fact producing power through their maps, appropriating and taming the world around them in ways only fully accessible to the reader, the historian, the viewer. Harley invites us to push back against a definition of cartographic studies that follows what cartographers themselves believe cartography must be. One can now, like the author of this essay, be a theorist and historian of cartographic culture without ever having made a map. Having one’s work exist outside the power-formation networks of cartographic technology provides a unique view into how maps make meaning and power out in the world. The main goal of this essay, as I turn to the digital humanities, is to encourage those interested in the digital turn to make room for those who study, observe, and critique, but do not make.[2]

    Though the digital turn in the humanities is often celebrated for its wider scope and its ability to allow scholars to interpret—or at least observe—data trends across many more books than one human could read in the research period of an academic project, I would argue that the main thrust of the digital turn can be understood through its preoccupation with a fantasy of access and a view of its labor as fundamentally different from the labor of traditional academic discourse. A radical hybridity is celebrated. Rather than just read books and argue about their contents, the digital humanist is able to draw from a wide variety of sources and expanded data. Michael Witmore, in a recent essay published in New Literary History, celebrates this age of hybridity: “If we speak of hybridization as the point where constraints cease to be either intellectual or physical, where changes in the earth’s mean temperature follow just as inevitably from the ‘political choices of human beings’ as they do from the ‘laws of nature,’ we get a sense of how rich and productive the modernist divide has been. Hybrids have proliferated. Indeed, they seem inexhaustible” (355). Witmore sees digital humanities as existing within this hybridity: “The Latourian theory of hybrids provides a useful starting point for thinking about a field of inquiry in which interpretive claims are supported by evidence obtained via the exhaustive, enumerative resources of computing” (355). The emphasis on the “exhaustive” and “enumerative” resources of computing would imply, even if this were not Witmore’s intention, that computing opens a depth of evidence not available to the non-hybrid, non-digitally enabled humanist.

    Indeed, in certain corners of DH, one often finds a suspicious eye cast on the value of traditional exegetical practices pursued without any digital engagement. In The Digital Humanist: A Critical Inquiry by Teresa Numerico, Domenico Fiormonte, and Francesca Tomasi, “the authors call on humanists to acquire the skills to become digital humanists,” elaborating: “Humanists must complete a paso doble, a double step: to rediscover the roots of their own discipline and to consider the changes necessary for its renewal. The start of this process is the realization that humanists have indeed played a role in the history of informatics” (2015, x). Numerico, Fiormonte, and Tomasi offer a vision of the humanities as in need of “renewal” rather than under attack from external forces. The suggestion is that the humanities need to rediscover their roots while at the same time taking on the “tools necessary for [their] renewal,” tools which are related to their “role in the history of informatics” and computing. The humanities are thus shown to be caught in a double bind: they have forgotten their roots, and they are unable to innovate without turning to the digital.

    To offer a political aside: while Numerico, Fiormonte, and Tomasi offer a compelling and necessary history of the humanistic roots of computing, their argument is well in line with right-leaning attacks on the humanities. In their view, the humanities have fallen away from their first purpose, their roots. While the authors of the volume see these roots as connected to the early years of modern computer science, they could just as easily, especially given what early computational humanities looked like, be urging a return to philology and to the world of concordances and indexing that were so important to early and mid-twentieth-century literary studies. They might also gesture instead at the deep history of political and philosophical thought out of which the modern university was born, and which was considered fundamental to the very project of university education until only very recently. Barring a return to these roots, the least the humanities can do to survive is to renew themselves through a connection to the digital and to the site of modern work: the computer terminal.

    Of course, what scholarly work is done outside the computer terminal? Journals and, increasingly, whole university press catalogs are being digitized and sold to university libraries on a subscription basis. Scholars read these materials and then type their own words into word-processing programs on machines (even if, like the recent Freewrite released by Astrohaus, the machine attempts to appear as little like a computer as possible) and then, in almost all cases, email their work to editors, who edit it digitally and publish it either in digitally enabled print or directly online. So why aren’t humanists of all sorts already considered connected to the digital?

    The answer is complicated and, like so many things in DH, depends on which particular theorist or practitioner you ask. Matthew Kirschenbaum writes about how one knows one is a digital humanist:

    You are a digital humanist if you are listened to by those who are already listened to as digital humanists, and they themselves got to be digital humanists by being listened to by others. Jobs, grant funding, fellowships, publishing contracts, speaking invitations—these things do not make one a digital humanist, though they clearly have a material impact on the circumstances of the work one does to get listened to. Put more plainly, if my university hires me as a digital humanist and if I receive a federal grant (say) to do such a thing that is described as digital humanities and if I am then rewarded by my department with promotion for having done it (not least because outside evaluators whom my department is enlisting to listen to as digital humanists have attested to its value to the digital humanities), then, well, yes, I am a digital humanist. Can you be a digital humanist without doing those things? Yes, if you want to be, though you may find yourself being listened to less unless and until you do some thing that is sufficiently noteworthy that reasonable people who themselves do similar things must account for your work, your thing, as a part of the progression of a shared field of interest. (2014, 55)

    Kirschenbaum defines the digital humanist as, mostly, someone who does something that earns the recognition of other digital humanists. He argues that this is not particularly different from the traditional humanities, in which publications, grants, jobs, etc. are the standard markers of who is or is not a scholar. Yet one wonders, especially in the age of the complete collapse of the humanities job market, if such institutional distinctions are either ethical or accurate. What would we call someone with a Ph.D. (or even without one) who spends their days reading books, reading scholarly articles, and writing in their own room about the Victorian verse monologue or the early Tudor dramatic interludes? If no one reads a scholar, are they still a scholar? For the creative arts, we seem to have answered this question. We believe that the work of a poet, artist, or philosopher matters much more than their institutional appreciation or memberships during the era of the work’s production. Also, the need to be “listened to” is particularly vexed and reflects some of the political critiques that are often launched at DH. Who is most listened to in our society? White, cisgendered, heterosexual men. In the age of Trump, we are especially attuned to the fact that whom we choose to listen to is not always the most deserving or talented voice, but the one reflecting existent narratives of racial and economic distribution.

    Beyond this, the combined requirement of institutional recognition and economic investment (a salary from a university, a prestigious grant paid out) ties the work of the humanist to institutional rewards. One can be a poet, scholar, or thinker in one’s own house, but one cannot be an investment banker or a lawyer or a police officer by self-declaration. The fluid nature of who can be a philosopher, thinker, poet, or scholar has always meant that the work, not the institutional affiliation, of a writer/maker is what matters. Though DH is a diverse body of practitioners doing all sorts of work, it is often framed, sometimes only implicitly, as a return to “work” over “theory.” Kirschenbaum, for instance, defending DH against accusations that it is against the traditional work of the humanities, writes: “Digital humanists don’t want to extinguish reading and theory and interpretation and cultural criticism. Digital humanists want to do their work… they want professional recognition and stability, whether as contingent labor, ladder faculty, graduate students, or in ‘alt-ac’ settings” (56). They essentially want the same things any other scholar does. Yet, while digital humanists are on the one hand defined by their ability to be listened to and to attain professional recognition and stability, they are, on the other, still in search of that recognition and stability and eager to reshape humanistic work toward a more technological model.

    This leads to a question that is not always explored closely enough in discussions of the digital humanities in higher education. Though scholars are rightly building bridges between STEM and the humanities (pushing for STEAM over STEM), there are major institutional differences between how the humanities and the sciences have traditionally functioned. Scientific research largely happens because of institutional investment of some kind, whether from governmental, NGO, or corporate grants. This is why the funding sources of any given study are particularly important to follow. In the humanities, of course, grants also exist, and they are a marker of career prestige. No one could doubt the benefit of time spent in a far-away archive, or at home writing instead of teaching because of a dissertation-completion grant. Grants, in other words, boost careers, but they are not necessary.[3] Very successful humanists have depended on library resources alone to produce influential work. In many cases, access to a library, a computer, and a desk is all one needs, and the digitization of many archives (a phenomenon not free from political and ethical complications) has expanded access to archival materials once available only to students of wealthy institutions with deep special-collections budgets or to those with grants enabling them to travel and lodge far away for their research.

    All this is to say that a particular valorization of the sciences is risky business for the humanities. Kirschenbaum recommends that since “digital humanities…is sometimes said to suffer from Physics envy,” the field should embrace this label and turn to “a singularly powerful intellectual precedent for examining in close (yes, microscopic) detail the material conditions of knowledge production in scientific settings or configurations. Let us read citation networks and publication venues. Let us examine the usage patterns around particular tools. Let us treat the recensio of data sets” (60). Longing for the humanities to resemble the sciences is nothing new. Longing for data sets instead of individual texts, for “particular tools” rather than a philosophical problem or trend, can sometimes be a helpful corrective to more Platonic searches for the “spirit” of a work or movement. And yet there are risks to this approach, not least because the works themselves, that is, the objects of inquiry, are treated in such general terms that they become essentially invisible. One can miss the tree for the forest and know more about the number of citations of Dante’s Commedia than about the original text, or the spirit in which those citations are made. Surely, there is room for both, except when, because of shrinking hiring practices, there isn’t.

    In fact, the economic politics of digital humanities has long been a source of at times fiery debate. Daniel Allington, Sarah Brouillette, and David Golumbia, in “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” argue that the digital humanities have long been defined by their preference for lab- and project-based sources of knowledge over traditional humanistic inquiry:

    What Digital Humanities is not about, despite its explicit claims, is the use of digital or quantitative methodologies to answer research questions in the humanities. It is, instead, about the promotion of project-based learning and lab-based research over reading and writing, the rebranding of insecure campus employment as an empowering “alt-ac” career choice, and the redefinition of technical expertise as a form (indeed, the superior form) of humanistic knowledge. (Allington, Brouillette and Golumbia 2016)

    This last point, the valorization of “technical expertise,” is, I would argue, profoundly difficult to perform in a way that doesn’t implicitly devalue the classic toolbox of humanistic inquiry. The motto “More hack, less yack”—a favorite of the THATCamps, collaborative “un-conferences”—encapsulates this idea. Too much unfettered talking could lead to discord, to ambiguity, and to strife. To hack, on the other hand, is understood as something tangible and implicitly more worthwhile than the production of discourse outside of particular projects and digital labs. Yet, as Natalia Cecire has noted, “You show up at a THATCamp and suddenly folks are talking about separating content and form as if that were, like, a real thing you could do. It makes the head spin” (Cecire 2011). Context, with all its ambiguities, once the bedrock of humanistic inquiry, is being sidestepped for massive data analysis that, by the very nature of distant reading, cannot account for context to a degree that would satisfy, say, the many Renaissance scholars who trained me. Cecire’s argument is a valuable one. In her post, she does not argue that we should necessarily follow a strategy of “no hack,” only that “we should probably get over the aversion to ‘yack.’” As she notes, “[yack] doesn’t have to replace ‘hack’; the two are not antithetical.”

    As DH continues to define itself, one can detect a sense that digital humanists’ focus on individual pieces or series of data, as well as their work in coding, embeds them in more empirical conversations that do not float up to the level of speculation so emblematic of what used to be called high theory. This is, for many DH practitioners, a source of great pride. Kirschenbaum ends his essay with the following observation: “there is one thing that digital humanities ineluctably is: digital humanities is work, somebody’s work, somewhere, some thing, always. We know how to talk about work. So let’s talk about this work, in action, this actually existing work” (61). The author’s insistence on “some thing” and “this actually existing work” implies that there is work that is not centered on a thing or that does not actually exist, and that the move toward more concrete objects of inquiry, toward more empirical subjects, is a defining characteristic of digital humanities.

    This, among other issues, has made many respond to the digital humanities as if they are cooperating with and participating in the corporatized ideologies of Silicon Valley “tech culture.” Whitney Trettien, in an insightful blogpost, claims, “Humanities scholars who engage with technology in non-trivial ways have done a poor job responding to such criticism” and accuses those who criticize digital humanities of “continuing to reify a diverse set of practices as a homogeneous whole.” Let me be clear: I am not claiming that Kirschenbaum or Trettien or any other scholar writing in a theoretical mode about digital humanities is representative of an entire field. Their writing is, however, part of the discursive community, and when those of us whose work is enabled by digital resources but who do not build digital tools see our work described as a “trivial” engagement with the digital, and see it contrasted, implicitly but still clearly, with “this actually existing work,” it is hard not to feel that the humanist working on texts with digital tools (but not about those tools or about data derived from digital modeling) is being somehow slighted.

    For instance, in a short essay by Tom Scheinfeldt, “Why Digital Humanities Is ‘Nice,’” the author claims: “One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are ‘collegiality,’ ‘openness,’ and ‘collaboration’” (2012, 1). I have to admit I have not noticed what Scheinfeldt claims people often notice (perhaps I have spent too much time on Twitter watching digital humanities debates unfurl in less than “nice” ways), but the claim, even as a discursive and defining fiction around DH, helps us understand one thread of the digital humanities’ project of self-definition: we are kind because what we work on is verifiable fact, not complicated and speculative philosophy or theory. Scheinfeldt says as much as he concludes his essay:

    Digital humanities is nice because, as I have described in earlier posts, we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically, or the practical needs of our projects require us simply to pick one and move on. Moreover, as Sean Takats, my colleague at the Roy Rosenzweig Center for History and New Media (CHNM), pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short, and digital humanities stays nice. (2)

    The most obvious question one is left with is: but what is the code doing? Where are the humanities in this vision of the digital? What truly discursive and interpretative work could produce fundamental disagreements that could be resolved simply by verifying the code in a community setting? Moreover, the celebration of how easily community norms can be enforced when an argument goes “too far afield” presents a troubling vision of a discursive community, one in which the appearance of agreement, enforced through “empirical” testing, is more important than freedom of debate. In our current political climate, one wonders if such empirically minded groupthink adequately makes room for more vulnerable, and not quite as loud, voices. When the goal is a functioning website or program, Scheinfeldt may be quite right, but when it comes to discursive work in the humanities, citing text, for instance, rarely quells disagreement; it only makes clearer where the battle lines are drawn. This is particularly ironic given that the digital humanities, understood as a giant, discursive, never-quite-adequate term for the field, is still defining itself, and has been defining itself for decades, with essay after essay asking just what DH is.

    I am echoing here some of the arguments offered by Adeline Koh in her essay “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” In this quite important intervention, Koh argues that DH is centered on two linked characteristics: niceness and technological expertise. Though one might think these requirements disparate, Koh reveals how they are linked in the formation of a DH social contract:

    In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge—which appears to happen in [prominent DH scholar Stephen] Ramsay’s formulation of DH 2—is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility. In other words, these two rules form the basis of an attempt to enforce a digital humanities social contract: necessary conditions (technical knowledge) that impose civic responsibilities (civility and niceness). (100)

    Koh believes that what is necessary to loosen the link between the DH social contract and the tenets of liberalism is an expanded genealogy of the digital humanities: she urges DH to consider its roots beyond humanities computing.[4]

    To demand that one work with technical expertise on “this actually existing work”—whatever that work may end up being—is to state rather clearly that there are guidelines fencing in the digital humanities. As in the history of cartographic studies, the opinions of the makers, the people closest to the data sets, have been allowed to determine what the digital humanities are (or what DH is). Just as J.B. Harley challenged historians and theorists of cartography to ignore what cartographers say and to explore maps and mapmaking apart from the tools needed to make them, perhaps DH is ready to enter a new phase, beginning its own renewal by no longer valorizing tools, code, and technology and instead letting in the observers, the consumers, the fantasists, and the historians of power and oppression (without their laptops). Indeed, what DH can learn from the history of cartography is that what DH is, in all its many forms, is seldom (just) what digital humanists say it is.

    _____

    Tim Duffy is a scholar of Renaissance literature, poetics, and spatial philosophy.


    _____

    Notes

    [1] See David Stoddart, “Geography—a European Science,” in On Geography and Its History, pp. 28-40. For a discussion of Stoddart’s thinking, see Derek Gregory, Geographical Imaginations, pp. 16-21.

    [2] Obviously, critics and writers make, but their critique exists outside of the production of the artifact that they study. Cartographic theorists, as this article will argue, need not be cartographers themselves, any more than a critic or theorist of the digital need be a programmer or creator of digital objects.

    [3] For more on the political problems of dependence on grants, see Waltzer (2012): “One of those conditions is the dependence of the digital humanities upon grants. While the increase in funding available to digital humanities projects is welcome and has led to many innovative projects, an overdependence on grants can shape a field in a particular way. Grants in the humanities last a short period of time, which make them unlikely to fund the long-term positions that are needed to mount any kind of sustained challenge to current employment practices in the humanities. They are competitive, which can lead to skewed reporting on process and results, and reward polish, which often favors the experienced over the novice. They are external, which can force the orientation of the organizations that compete for them outward rather than toward the structure of the local institution and creates the pressure to always be producing” (340-341).

    [4] In her reading of how digital humanities deploys niceness, Koh writes “In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge…is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility” (100).

    _____

    Works Cited

    • Allington, Daniel, Sarah Brouillette, and David Golumbia. 2016. “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books.
    • Boelhower, William. 1988. “Inventing America: The Culture of the Map” in Revue française d’études américaines 36. 211-224.
    • Cecire, Natalia. 2011. “When DH Was in Vogue; or, THATCamp Theory.”
    • Dalché, Patrick Gautier. 2007. “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century)” in Cartography in the European Renaissance, Volume 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 285-364.
    • Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. New York: Punctum Books.
    • Gregory, Derek. 1994. Geographical Imaginations. Cambridge: Blackwell.
    • Harley, J.B. 2011. “Deconstructing the Map” in The Map Reader: Theories of Mapping Practice and Cartographic Representation, First Edition, edited by Martin Dodge, Rob Kitchin and Chris Perkins. New York: John Wiley & Sons, Ltd. 56-64.
    • Jacob, Christian. 2005. The Sovereign Map. Translated by Tom Conley. Chicago: University of Chicago Press.
    • Kirschenbaum, Matthew. 2014. “What is ‘Digital Humanities,’ and Why Are They Saying Such Terrible Things about It?” Differences 25:1. 46-63.
    • Koh, Adeline. 2014. “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” Differences 25:1. 93-106.
    • Scheinfeldt, Tom. 2012. “Why Digital Humanities is ‘Nice.’” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Trettien, Whitney. 2016. “Creative Destruction/‘Digital Humanities.’” Medium (Aug 24).
    • Watts, Pauline Moffitt. 2007. “The European Religious Worldview and Its Influence on Mapping” in The History of Cartography: Cartography in the European Renaissance, Vol. 3, part 1. Edited by David Woodward. Chicago: University of Chicago Press. 382-400.
    • Waltzer, Luke. 2012. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Witmore, Michael. 2016. “Latour, the Digital Humanities, and the Divided Kingdom of Knowledge.” New Literary History 47:2-3. 353-375.

     

  • Data and Desire in Academic Life


    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier told me the news, and before I really had time to feel any particular way about it, my leg vibrated. I had received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the offer of tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data”: how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was an all-too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And hadn’t I, in fact, used that card on multiple occasions in San Francisco for purchases not much smaller than my grocery bill? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated feat of real-time, data-enabled prescience but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your FitBit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state-run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of updating). There is still the dazzlement at the sheer, magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors, which recast academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire: about who gets access to this kind of data, about the ethics of working with companies that have an existential interest in profit and shareholder return, and about the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is never directly answered but is strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people by Mike Savage and Roger Burrows’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Savage and Burrows, these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Savage and Burrows note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why, and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the FitBit? The growing popularity of these devices suggests, at the very least, that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. The lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state retrenchment on education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The growing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand-wringing over the politics of working with companies like Facebook, Twitter, and Google. If there is trepidation in the N-gram project recounted in Uncharted, it comes mostly from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of one’s work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to being a hopeful attempt to coin a buzzy new word for the digital, “culturomics” linguistically brings the humanities closer to the supposed precision, determination, and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather this renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile the separation has always been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture, and everyday life; [1] rather, it signals the acceptance of the idea of culture as a transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of increasing the engagement of eyeballs on advertisements. The N-gram Viewer does not make Google money, but it perhaps increases public support for the company’s larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars in defending in court.

    This vision of the humanities is transactionary in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the N-gram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry, one that seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the N-gram Viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection: a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires accepting these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future in an attempt to maintain that cash flow. For Google, this is a vision in which all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, and businesses are integrated not for the public benefit but because of the power of scale to sell, rent, or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives the company the credibility to be your corporate (sen.se) mother. What, we might imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.


    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics,” and “late capitalism” are, more often than not, used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • The Digital Turn



    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which one of the major literary studies journals dedicates part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists, our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and the work that comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow,” most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my having been one of the first English professors in the country hired to a position whose primary specialization was explicitly designated Digital Humanities (at the University of Virginia in 2003), and despite being a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities, sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the “Dark Side of the Digital Humanities” at the 2013 MLA Annual Convention and several at recent American Studies Association conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were central; cultural representation and its relation (or lack of relation) to identity politics was a primary concern; close readings of texts both likely and unlikely figured prominently; and the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and to what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long, enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because they are both so confrontational and so antithetical to the remit of the literary humanities, from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities, as represented by so august an institution as boundary 2, should turn its attention there both validates digital enthusiasts’ sense of the medium’s importance and should provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia


  • Video: Africa Theorises (Tony Bogues and Achille Mbembe)


    Coverage of the University of Cape Town’s “Africa Theorises” has arrived: a conversation between our esteemed colleague Anthony Bogues and the renowned scholar Achille Mbembe. Topics include the “redrawing of the global intellectual map,” the “flight from theory” and “scientism,” the waning hegemony of the “Western Archive,” the possibilities of “liberty,” and the “modes of being human.”