b2o: boundary 2 online

Category: The Digital Turn

  • Tim Duffy — Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn

    Tim Duffy

Christian Jacob, in The Sovereign Map, describes maps as enablers of fantasy: “Maps and globes allow us to live a voyage reduced to the gaze, stripped of the ups and downs and chance occurrences, a voyage without the narrative, without pitfalls, without even the departure” (2005). Consumers and theorists of maps, more than cartographers themselves, are especially positioned to enjoy the “voyage reduced to the gaze” that cartographic artifacts (including texts) are able to provide. An outside view, distant from the production of the artifact, activates the epistemological potential of the artifact in a way that producing the same artifact cannot.

This dynamic operates at the conceptual level as well, in the interpretation of cartography as a discipline. J.B. Harley, in his famous essay “Deconstructing the Map,” writes that:

    a major roadblock to our understanding is that we still accept uncritically the broad consensus, with relatively few dissenting voices, of what cartographers tell us maps are supposed to be. In particular, we often tend to work from the premise that mappers engage in an unquestionably “scientific” or “objective” form of knowledge creation…It is better for us to begin from the premise that cartography is seldom what cartographers say it is (Harley 1989, 57).

    Harley urges an interpretation of maps outside the purview and authority of the map’s creator, just as a literary scholar would insist on the critic’s ability to understand the text beyond the authority of what the authors say about their texts. There can be, in other words, a power in having distance from the act of making. There is clarity that comes from the role of the thinker outside of the process of creation.

The goal of this essay is to push back against the valorization of “tools” and “making” in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research, and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of that work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.

Cartography in the sixteenth century, even as its tools and representational techniques were becoming more and more sophisticated, could never quite abandon the religious legacies of its past, nor did it want to. Roger Bacon in the thirteenth century had claimed that only with a thorough understanding of geography could one understand the Bible. Pauline Moffitt Watts, in her essay “The European Religious Worldview and Its Influence on Mapping,” concludes that many maps, including those by Eskrich and Ortelius, preserved a sense of providential and divine meaning even as they sought to narrate smaller, local areas:

    Although the messages these maps present are inescapably bound, their ultimate source—God—transcends and eclipses history. His eternity and omnipresence is signed but not constrained in the figurae, places, people, and events that ornament them. They offer fantastic, sometimes absurd vignettes and pastiches that nonetheless integrate the ephemera into a vision of providential history that maintained its power to make meaning well into the early modern era. (2007, 400)

The way maps make meaning is contained not just in the technical expertise with which they are constructed but in the visual experiences they provide, experiences that “make meaning” for the viewer. By overemphasizing the way maps are made, or the geometric innovations that make their creation possible, the cartographic historian and theorist would miss the full effect of the work.

Yet the spiritual dimensions of mapmaking were not in opposition to technological expertise, and in many cases the two went hand in hand. In his book Radical Arts, the Anglo-Dutch scholar Jan van Dorsten describes the spiritual motivations of sixteenth-century cosmographers disappointed by academic theology’s inability to ease the trauma of the European Reformation: “Theology…as the traditional science of revelation had failed visibly to unite mankind in one indisputably ‘true’ perception of God’s plan and the properties of His creature. The new science of cosmography, its students seem to argue, will eventually achieve precisely that, thanks to its non-disputative method” (1970, 56-7). Some mapmakers of the sixteenth century in England, the Netherlands, and elsewhere, Ortelius among them, imagined that the science and art of describing the created world, a text rivaling scripture in both revelatory potential and divine authorship, would create unity out of the disputation-prone culture of academic theology. Unlike theology, where thinkers map an invisible world held in biblical scripture and apostolic tradition (as well as a millennium’s worth of commentary and exegesis), the liber naturae, the book of nature, is available to the eyes more directly, and seemingly less prone to disputation.

Cartographers were attempting to create an accurate imago mundi—surely a more tangible and grounded goal than trying to map divinity. Yet, as Patrick Gautier Dalché notes in his essay “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” the modernizing techniques of cartography after the “rediscovery” of Ptolemy’s work did not exactly follow a straight line of empirical progress:

The modernization of the imago mundi and the work on modes of representation that developed during the early years of the sixteenth century should not be seen as either more or less successful attempts to integrate new information into existing geographic pictures. Nor should they be seen as steps toward a more “correct” representation, that is, toward conforming to our own notion of correct representation. They were exploratory games played with reality that took people in different directions…Ptolemy was not so much the source of a correct cartography as a stimulus to detailed consideration of an essential fact of cartographic representation: a map is a depiction based on a problematic, arbitrary, and malleable convention. (2007, 360)

So even as the maps of this period may appear more “correct” to us, they are still engaged in experimentation to a degree that undermines any sense of the map as simply an empirical graphic representation of the earth. The “problematic, arbitrary, and malleable” conventions, used by the cartographer but observed and understood by the cartographic theorist and historian, reveal the sort of synergetic relationship between maker and observer, practitioner and theorist, that allows an artifact to come into greater focus.

Yet cartography for much of its history turned away from seeing its work as culturally or even politically embedded. David Stoddart, in his history of geography, labels Cook’s voyage to the Pacific in 1769 as the origin point of cartography’s transformation into an empirical science.[1] Stoddart places geography, from that point onward, within the realm of the natural sciences based on, as Derek Gregory observes, “three features of decisive significance for the formation of geography as a distinctly modern, avowedly ‘objective’ science: a concern for realism in description, for systematic classification in collection, and for the comparative method in explanation” (Gregory 1994, 19). What is lost, then, in this march toward empiricism is any sense of culturally embedded codes within the map. The map, like a lab report of scientific findings, is meant to represent what is “actually” there. This term “actually” will come back to haunt us when we turn to the digital humanities.

    Yet, in the long history of mapping, before and after this supposed empirical fulcrum, maps remain slippery and malleable objects that are used for a diverse range of purposes and that reflect the cultural imagination of their makers and observers. As maps took on the appearance of the empirical and began to sublimate the devotional and fantastical aspects they had once shown proudly, they were no less imprinted with cultural knowledge and biases. If anything, the veil of empiricism allowed the cultural, political, and imperial goals of mapmaking to be hidden.

In his groundbreaking essay “Inventing America: The Culture of the Map,” William Boelhower argues precisely that maps did not simply represent America graphically; rather, America was invented by maps. “Accustomed to the success of scientific discourse and imbued with the Cartesian tradition,” he writes, “the sons of Columbus naturally presumed that their version of reality was the version” (1988, 212). While Europeans believed they were simply mapping what they saw according to empirical principles, they did not realize they were actually inventing America in their own discursive image. He elaborates: “The Map is America’s precognition; at its center is not geography in se but the eye of the cartographer. The fact requires new respect for the in-forming relation between the history of modern cartography and the history of the Euro-American’s being-in-the-new-world” (213). Empiricism, then, was empire. “Empirical” maps made the eye of the cartographer into the ideal “objective” viewer, producing a fictional way of seeing that reflected state power. Boelhower refers to the scale map as a kind of “panopticon” because of the “line’s achievement of an absolute and closed system no longer dependent on the local perspectivism of the image. With map in hand, the physical subject is theoretically everywhere and nowhere, truly a global operator” (222). What appears, then, simply to be the gathering, studying, and representation of data is in fact a system of discursive domination in which the cartographer asserts their worldview onto a site. As Boelhower puts it: “Never before had a nation-state sprung so rationally from a cartographic fiction, the Euclidean map imposing concrete form on a territory and a people” (223). America was a cartographic invention made to appear as a simple empirical record of what was there.

    To turn again to J.B. Harley’s 1989 bombshell, maps are always evidence of cultural norms and perspectives, even when they try their best to appear sparse and scientific. Referring to “plain scientific maps,” Harley claims that “such maps contain a dimension of ‘symbolic realism’ which is no less a statement of political authority or control than a coat-of-arms or a portrait of a queen placed at the head of an earlier decorative map.” Even “accuracy and austerity of design are now the new talismans of authority culminating in our own age with computer mapping” (60). To represent the world “is to appropriate it” and to “discipline” and “normalize” it (61). The more we move away from cultural markers for the mythical comfort of “empirical” data, the more we find we are creating dominating fictions. There is no representation of data that does not exist within the hierarchies of cultural codes and expectations.

What this rather eclectic history of cartography reveals is that even when maps and mapmaking attempt to hide or move beyond their cultural and devotional roots, cultural, ethical, and political markers inevitably embed themselves in the map’s role as a broker of power. Maps sort data, but in so doing they create worldviews with real-world consequences. Even as some practitioners of mapmaking in the early modern period, such as the Familists, who counted several cartographers among their membership, might have thought their cartographic work provided a more universal and less disputation-prone discursive focus than, say, philosophy or theology, they were producing power through their maps, appropriating and taming the world around them in ways only fully accessible to the reader, the historian, the viewer. Harley invites us to push back against a definition of cartographic studies that follows what cartographers themselves believe cartography must be. One can now, like the author of this essay, be a theorist and historian of cartographic culture without ever having made a map. Having one’s work exist outside the power-formation networks of cartographic technology provides a unique view into how maps make meaning and power out in the world. The main goal of this essay, as I turn to the digital humanities, is to encourage those interested in the digital turn to make room for those who study, observe, and critique, but do not make.[2]

Though the digital turn in the humanities is often celebrated for its wider scope and its ability to let scholars interpret—or at least observe—data trends across many more books than one human could read in the research period of an academic project, I would argue that the fantasy of the digital turn is best understood through its preoccupation with access and its view of its labor as fundamentally different from the labor of traditional academic discourse. A radical hybridity is celebrated. Rather than just read books and argue about their contents, the digital humanist is able to draw from a wide variety of sources and expanded data. Michael Witmore, in a recent essay published in New Literary History, celebrates this age of hybridity: “If we speak of hybridization as the point where constraints cease to be either intellectual or physical, where changes in the earth’s mean temperature follow just as inevitably from the ‘political choices of human beings’ as they do from the ‘laws of nature,’ we get a sense of how rich and productive the modernist divide has been. Hybrids have proliferated. Indeed, they seem inexhaustible” (355). Witmore sees digital humanities as existing within this hybridity: “The Latourian theory of hybrids provides a useful starting point for thinking about a field of inquiry in which interpretive claims are supported by evidence obtained via the exhaustive, enumerative resources of computing” (355). The emphasis on the “exhaustive” and “enumerative” resources of computing would imply, even if this were not Witmore’s intention, that computing opens a depth of evidence not available to the non-hybrid, non-digitally enabled humanist.

Indeed, in certain corners of DH, one often finds a suspicious eye cast on the value of traditional exegetical practices pursued without any digital engagement. In The Digital Humanist: A Critical Inquiry, Domenico Fiormonte, Teresa Numerico, and Francesca Tomasi “call on humanists to acquire the skills to become digital humanists,” elaborating: “Humanists must complete a paso doble, a double step: to rediscover the roots of their own discipline and to consider the changes necessary for its renewal. The start of this process is the realization that humanists have indeed played a role in the history of informatics” (2015, x). Fiormonte, Numerico, and Tomasi offer a vision of the humanities as in need of “renewal” rather than under attack from external forces. The suggestion is that the humanities need to rediscover their roots while at the same time taking on the “tools necessary for [their] renewal,” tools related to their “role in the history of informatics” and computing. The humanities are thus caught in a double bind: they have forgotten their roots, and they are unable to innovate without considering the digital.

To offer a political aside: while Fiormonte, Numerico, and Tomasi offer a compelling and necessary history of the humanistic roots of computing, their argument is well in line with right-leaning attacks on the humanities. In their view, the humanities have fallen away from their first purpose, their roots. While the authors of the volume see these roots as connected to the early years of modern computer science, they could just as easily, especially given what early computational humanities looked like, be urging a return to philology and to the world of concordances and indexing that was so important to early and mid-twentieth-century literary studies. They might also gesture instead at the deep history of political and philosophical thought out of which the modern university was born, and which was considered fundamental to the very project of university education until only very recently. Barring a return to these roots, the least the humanities can do to survive is to renew themselves through a connection to the digital and to the site of modern work: the computer terminal.

Of course, what scholarly work is done outside the computer terminal? Journals and, increasingly, whole university press catalogs are being digitized and sold to university libraries on a subscription basis. Scholars read these materials and then type their own words into word processing programs on machines (even if, like the recent Freewrite machine released by Astrohaus, the machine attempts to appear as little like a computer as possible) and then, in almost all cases, email their work to editors, who edit it digitally and publish it either through digitally enabled print publishing or directly online. So why aren’t humanists of all sorts already considered connected to the digital?

    The answer is complicated and, like so many things in DH, depends on which particular theorist or practitioner you ask. Matthew Kirschenbaum writes about how one knows one is a digital humanist:

    You are a digital humanist if you are listened to by those who are already listened to as digital humanists, and they themselves got to be digital humanists by being listened to by others. Jobs, grant funding, fellowships, publishing contracts, speaking invitations—these things do not make one a digital humanist, though they clearly have a material impact on the circumstances of the work one does to get listened to. Put more plainly, if my university hires me as a digital humanist and if I receive a federal grant (say) to do such a thing that is described as digital humanities and if I am then rewarded by my department with promotion for having done it (not least because outside evaluators whom my department is enlisting to listen to as digital humanists have attested to its value to the digital humanities), then, well, yes, I am a digital humanist. Can you be a digital humanist without doing those things? Yes, if you want to be, though you may find yourself being listened to less unless and until you do some thing that is sufficiently noteworthy that reasonable people who themselves do similar things must account for your work, your thing, as a part of the progression of a shared field of interest. (2014, 55)

Kirschenbaum defines the digital humanist as, mostly, someone who does something that earns the recognition of other digital humanists. He argues that this is not particularly different from the traditional humanities, in which publications, grants, jobs, etc. are the standard measures of who is or is not a scholar. Yet one wonders, especially in the age of the complete collapse of the humanities job market, if such institutional distinctions are either ethical or accurate. What would we call someone with a Ph.D. (or even without one) who spends their days reading books, reading scholarly articles, and writing in their own room about the Victorian verse monologue or the early Tudor dramatic interludes? If no one reads a scholar, are they still a scholar? For the creative arts, we seem to have answered this question. We believe that the work of a poet, artist, or philosopher matters much more than their institutional appreciation or memberships during the era of the work’s production. Also, the need to be “listened to” is particularly vexed and reflects some of the political critiques often launched at DH. Who is most listened to in our society? White, cisgendered, heterosexual men. In the age of Trump, we are especially attuned to the fact that whom we choose to listen to is not always the most deserving or talented voice, but the one reflecting existing narratives of racial and economic distribution.

Beyond this, the combined requirement of institutional recognition and economic investment (a salary from a university, a prestigious grant paid out) ties the work of the humanist to institutional rewards. One can be a poet, scholar, or thinker in one’s own house, but one cannot be an investment banker or a lawyer or a police officer by self-declaration. The fluid nature of who can be a philosopher, thinker, poet, or scholar has always meant that the work, not the institutional affiliation, of a writer/maker is what matters. Though DH is a diverse body of practitioners doing all sorts of work, it is often framed, sometimes only implicitly, as a return to “work” over “theory.” Kirschenbaum, for instance, defending DH against accusations that it is against the traditional work of the humanities, writes: “Digital humanists don’t want to extinguish reading and theory and interpretation and cultural criticism. Digital humanists want to do their work… they want professional recognition and stability, whether as contingent labor, ladder faculty, graduate students, or in ‘alt-ac’ settings” (56). They essentially want the same things any other scholar does. Yet digital humanists are, on the one hand, defined by their ability to be listened to and to command professional recognition and stability, and, on the other, still in search of that recognition and stability, eager to reshape humanistic work toward a more technological model.

This leads to a question that is not always explored closely enough in discussions of the digital humanities in higher education. Though scholars are rightly building bridges between STEM and the humanities (pushing for STEAM over STEM), there are major institutional differences between how the humanities and the sciences have traditionally functioned. Scientific research largely happens because of institutional investment of some kind, whether from governmental, NGO, or corporate grants. This is why the funding sources of any given study are particularly important to follow. In the humanities, of course, grants also exist, and they are a marker of career prestige. No one could doubt the benefit of time spent in a far-away archive, or at home writing instead of teaching, because of a dissertation-completion grant. Grants, in other words, boost careers, but they are not necessary.[3] Very successful humanists depend on library resources alone to produce influential work. In many cases, access to a library, a computer, and a desk is all one needs, and the digitization of many archives (a phenomenon not free from political and ethical complications) has expanded access to materials once available only to students of wealthy institutions with deep special-collections budgets, or to those with grants enabling them to travel and lodge far away for their research.

All this is to say that a particular valorization of the sciences is risky business for the humanities. Kirschenbaum recommends that since “digital humanities…is sometimes said to suffer from Physics envy,” the field should embrace this label and turn to “a singularly powerful intellectual precedent for examining in close (yes, microscopic) detail the material conditions of knowledge production in scientific settings or configurations. Let us read citation networks and publication venues. Let us examine the usage patterns around particular tools. Let us treat the recensio of data sets” (60). Longing for the humanities to resemble the sciences is nothing new. Longing for data sets instead of individual texts, for “particular tools” rather than a philosophical problem or trend, can sometimes be a helpful corrective to more Platonic searches for the “spirit” of a work or movement. And yet there are risks to this approach, not least because the works themselves, that is, the objects of inquiry, are treated in such general terms that they become essentially invisible. One can miss the tree for the forest and know more about the number of citations of Dante’s Commedia than about the original text, or the spirit in which those citations are made. Surely there is room for both, except when, because of shrinking hiring practices, there isn’t.

In fact, the economic politics of the digital humanities have long been a source of at times fiery debate. Daniel Allington, Sarah Brouillette, and David Golumbia, in “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” argue that the digital humanities have long been defined more by their preference for lab- and project-based sources of knowledge than by traditional humanistic inquiry:

    What Digital Humanities is not about, despite its explicit claims, is the use of digital or quantitative methodologies to answer research questions in the humanities. It is, instead, about the promotion of project-based learning and lab-based research over reading and writing, the rebranding of insecure campus employment as an empowering “alt-ac” career choice, and the redefinition of technical expertise as a form (indeed, the superior form) of humanistic knowledge. (Allington, Brouillette and Golumbia 2016)

    This last point, the valorization of “technical expertise,” is, I would argue, profoundly difficult to perform in a way that doesn’t implicitly devalue the classic toolbox of humanistic inquiry. The motto “More hack, less yack”—a favorite of the THATCamps, collaborative “un-conferences”—encapsulates this idea. Too much unfettered talking could lead to discord, to ambiguity, and to strife. To hack, on the other hand, is understood as something tangible and something implicitly more worthwhile than the production of discourse outside of particular projects and digital labs. Yet Natalia Cecire has noted, “You show up at a THATCamp and suddenly folks are talking about separating content and form as if that were, like, a real thing you could do. It makes the head spin” (Cecire 2011). Context, with all its ambiguities, once the bedrock of humanistic inquiry, is being sidestepped for massive data analysis that, by the very nature of distant reading, cannot account for context to a degree that would satisfy, say, the many Renaissance scholars who trained me. Cecire’s argument is a valuable one. In her post, she does not argue that we should necessarily follow a strategy of “no hack,” only that “we should probably get over the aversion to ‘yack.’” As she notes, “[yack] doesn’t have to replace ‘hack’; the two are not antithetical.”

As DH continues to define itself, one can detect a sense that digital humanists’ focus on individual pieces or series of data, as well as their work in coding, embeds them in more empirical conversations that do not rise to the level of speculation so emblematic of what used to be called high theory. This is, for many DH practitioners, a source of great pride. Kirschenbaum ends his essay with the following observation: “there is one thing that digital humanities ineluctably is: digital humanities is work, somebody’s work, somewhere, some thing, always. We know how to talk about work. So let’s talk about this work, in action, this actually existing work” (61). The insistence on “some thing” and “this actually existing work” implies that there is work that is not centered on a thing, or that does not actually exist, and that the move toward more concrete objects of inquiry, toward more empirical subjects, is a defining characteristic of digital humanities.

This, among other issues, has led many to respond to the digital humanities as if they were cooperating with and participating in the corporatized ideologies of Silicon Valley “tech culture.” Whitney Trettien, in an insightful blog post, claims, “Humanities scholars who engage with technology in non-trivial ways have done a poor job responding to such criticism,” and accuses those who criticize the digital humanities of “continuing to reify a diverse set of practices as a homogeneous whole.” Let me be clear: I am not claiming that Kirschenbaum or Trettien or any other scholar writing in a theoretical mode about digital humanities is representative of an entire field. But their writing is part of the discursive community, and when those of us whose work is enabled by digital resources but who do not build digital tools see our work described as a “trivial” engagement with the digital, and see it contrasted, implicitly but still clearly, with “this actually existing work,” it is hard not to feel that the humanist working on texts with digital tools (but not on the tools themselves or on data derived from digital modeling) is being somehow slighted.

For instance, in a short essay, “Why Digital Humanities is ‘Nice,’” Tom Scheinfeldt claims: “One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are ‘collegiality,’ ‘openness,’ and ‘collaboration’” (2012, 1). I have to admit I have not noticed what Scheinfeldt claims people often notice (perhaps I have spent too much time on Twitter watching digital humanities debates unfurl in less than “nice” ways), but the claim, even as a discursive and defining fiction around DH, helps us understand one thread of the digital humanities’ project of self-definition: we are kind because what we work on is verifiable fact, not complicated and speculative philosophy or theory. Scheinfeldt says as much as he concludes his essay:

    Digital humanities is nice because, as I have described in earlier posts, we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically, or the practical needs of our projects require us simply to pick one and move on. Moreover, as Sean Takats, my colleague at the Roy Rosenzweig Center for History and New Media (CHNM), pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short, and digital humanities stays nice. (2)

The most obvious question one is left with is: but what is the code doing? Where are the humanities in this vision of the digital? What truly discursive and interpretative work could produce fundamental disagreements that could be resolved simply by verifying the code in a community setting? Moreover, the celebration of how enforceable community norms are when an argument goes “too far afield” presents a troubling vision of a discursive community in which the appearance of agreement, enforceable through “empirical” testing, is more important than freedom of debate. In our current political climate, one wonders if such empirically minded groupthink adequately makes room for more vulnerable, and not quite as loud, voices. When the goal is a functioning website or program, Scheinfeldt may be quite right, but in discursive work in the humanities, citing text, for instance, rarely quells disagreement; it only makes clearer where the battle lines are drawn. This is particularly ironic given that the digital humanities, understood as a giant, never-quite-adequate discursive term for the field, is still defining itself, as it has for decades, in essay after essay asking just what DH is.

I am echoing here some of the arguments offered by Adeline Koh in her essay “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” In this quite important intervention, Koh argues that DH is centered on two linked characteristics, niceness and technological expertise. Though one might think these requirements are disparate, Koh reveals how they are linked in the formation of a DH social contract:

    In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge—which appears to happen in [prominent DH scholar Stephen] Ramsay’s formulation of DH 2—is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility. In other words, these two rules form the basis of an attempt to enforce a digital humanities social contract: necessary conditions (technical knowledge) that impose civic responsibilities (civility and niceness). (100)

Koh believes that what is necessary to loosen the link between the DH social contract and the tenets of liberalism is an expanded genealogy of the digital humanities, and she urges DH to consider its roots beyond humanities computing.[4]

To demand that one work with technical expertise on “this actually existing work”—whatever that work may end up being—is to state rather clearly that there are guidelines fencing in the digital humanities. As in the history of cartographic studies, the opinions of the makers have been allowed to determine what the digital humanities are (or what DH is). Like the moment when J.B. Harley challenged historians and theorists of cartography to ignore what the cartographers say and to explore maps and mapmaking outside of the tools needed to make a map, perhaps DH is ready to enter a new phase, one in which it begins its own renewal by no longer valorizing tools, code, and technology, and by letting in the observers, the consumers, the fantasists, and the historians of power and oppression (without their laptops). Indeed, what DH can learn from the history of cartography is that what DH is, in all its many forms, is seldom (just) what digital humanists say it is.

    _____

    Tim Duffy is a scholar of Renaissance literature, poetics, and spatial philosophy.

    _____

    Notes

[1] See David Stoddart, “Geography—a European Science,” in On Geography and Its History, pp. 28-40. For a discussion of Stoddart’s thinking, see Derek Gregory, Geographical Imaginations, pp. 16-21.

[2] Obviously, critics and writers make, but their critique exists outside of the production of the artifact that they study. Cartographic theorists, as this essay argues, need not be cartographers themselves, any more than a critic or theorist of the digital need be a programmer or creator of digital objects.

    [3] For more on the political problems of dependence on grants, see Waltzer (2012): “One of those conditions is the dependence of the digital humanities upon grants. While the increase in funding available to digital humanities projects is welcome and has led to many innovative projects, an overdependence on grants can shape a field in a particular way. Grants in the humanities last a short period of time, which make them unlikely to fund the long-term positions that are needed to mount any kind of sustained challenge to current employment practices in the humanities. They are competitive, which can lead to skewed reporting on process and results, and reward polish, which often favors the experienced over the novice. They are external, which can force the orientation of the organizations that compete for them outward rather than toward the structure of the local institution and creates the pressure to always be producing” (340-341).

[4] In her reading of how digital humanities deploys niceness, Koh writes, “In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge…is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility” (100).

    _____

    Works Cited

• Allington, Daniel, Sarah Brouillette, and David Golumbia. 2016. “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books.
• Boelhower, William. 1988. “Inventing America: The Culture of the Map.” Revue française d’études américaines 36. 211-224.
    • Cecire, Natalia. 2011. “When DH Was in Vogue; or, THATCamp Theory.”
• Dalché, Patrick Gautier. 2007. “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century).” In The History of Cartography: Cartography in the European Renaissance, Vol. 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 285-364.
• Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. New York: Punctum Books.
• Gregory, Derek. 1994. Geographical Imaginations. Cambridge: Blackwell.
• Harley, J.B. (1989) 2011. “Deconstructing the Map.” In The Map Reader: Theories of Mapping Practice and Cartographic Representation, First Edition, edited by Martin Dodge, Rob Kitchin, and Chris Perkins. New York: John Wiley & Sons. 56-64.
• Jacob, Christian. 2005. The Sovereign Map. Translated by Tom Conley. Chicago: University of Chicago Press.
    • Kirschenbaum, Matthew. 2014. “What is ‘Digital Humanities,’ and Why Are They Saying Such Terrible Things about It?” Differences 25:1. 46-63.
    • Koh, Adeline. 2014. “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” Differences 25:1. 93-106.
    • Scheinfeldt, Tom. 2012. “Why Digital Humanities is ‘Nice.’” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Trettien, Whitney. 2016. “Creative Destruction/‘Digital Humanities.’” Medium (Aug 24).
• Watts, Pauline Moffitt. 2007. “The European Religious Worldview and Its Influence on Mapping.” In The History of Cartography: Cartography in the European Renaissance, Vol. 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 382-400.
    • Waltzer, Luke. 2012. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Witmore, Michael. 2016. “Latour, the Digital Humanities, and the Divided Kingdom of Knowledge.” New Literary History 47:2-3. 353-375.


  • Gavin Mueller — Digital Proudhonism

    Gavin Mueller

In a passage from his 2014 book Information Doesn’t Want to Be Free, author and copyright reformer Cory Doctorow sounds a familiar note against strict copyright: “Creators and investors lose control of their business—they become commodity suppliers for a distribution channel that calls all the shots. Anti-circumvention [laws such as the Digital Millennium Copyright Act, which prohibits subverting controls on the intended use of digital objects] isn’t copyright protection, it’s middleman protection” (50).

This is the specter haunting the digital cultural economy, according to many of the most influential voices arguing to reform or disrupt it: the specter of the middleman, the monopolist, the distortionist of markets. Rather than an insurgency, this specter emanates from economic incumbency: these middlemen are the culture industries themselves. With the dual revolutions of the personal computer and the internet connection, record labels, book publishers, and movie studios could maintain their control and their profits only by asserting and strengthening intellectual property protections and squelching the new technologies that subverted them. Thus these “monopolies” of cultural production threatened to prevent individual creators from using technology to reach their audiences independently.

    Such a critique became conventional wisdom among a rising tide of people who had become accustomed to using the powers of digital technology to copy and paste in order to produce and consume cultural texts, beginning with music. It was most comprehensively articulated in a body of arguments, largely produced by technology evangelists and tech-aligned legal professionals, hailing from the Free Culture movement spearheaded by Lawrence Lessig. The critique’s practical form was the host of piratical activities and peer-to-peer technologies that, in addition to obviating traditional distribution chains, dedicated themselves to attacking culture industries, as well as their trade organizations such as the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA).

Connected to this critique is an alternate vision of the digital economy, one that leverages new technological commons, peer production, and network effects to empower creators. This vision has variations and travels under a number of different political banners, from anarchist to libertarian to liberal, with many more adherents who prefer no label at all.[1] It tells a compelling story (one Doctorow has adapted into novels for young people): against corporate monopolists and state regulation, a multitude, empowered by the democratizing effects bequeathed to society by networked personal computers and the technologies springing from them, is poised to revolutionize the production of media and information and, therefore, the political and economic structure as a whole. Work will be small-scale and independent but, bereft of corporate behemoths, more lucrative than in the past.

This paper traces the contours of the critique put forth by Doctorow and other revolutionaries of networked digital production in light of a nineteenth-century thinker who espoused remarkably similar arguments more than a century and a half ago: the French anarchist Pierre-Joseph Proudhon. Few of these writers are evident readers of Proudhon or explicitly subscribe to his views, though some, such as the Center for a Stateless Society, do. Rather than a formal doctrine, what I call “Digital Proudhonism” is better understood as what Raymond Williams (1977) calls a “structure of feeling”: a kind of “practical consciousness” that identifies “meanings and values as they are actively lived and felt” (132), in this case related to specific experiences of networked computer use. These “affective elements of consciousness and relationships” are often articulated in a political, or at least polemical, register, with real effects on the political self-understanding of networked subjects, the projects they pursue, and their relationship to existing law, policy, and institutions. Because of this, I seek to do more than identify currents of contemporary Digital Proudhonism. I maintain that the influence of this set of practices and ideas over the politics of digital production necessitates a critique. I argue that a return to Marx’s critique of Proudhon will aid us in piercing the Digital Proudhonist mystifications of the Internet’s effects on politics and industry, and in reformulating both a theory of cultural production under digital capitalism and a radical politics of work and technology for the twenty-first century.

    From the Californian Ideology to Digital Proudhonism

What I am calling Digital Proudhonism has precedent in the social critique of techno-utopian beliefs surrounding the internet. It echoes Langdon Winner’s (1997) diagnosis of “cyberlibertarianism” in the Progress and Freedom Foundation’s 1994 manifesto “Magna Carta for the Knowledge Age,” where “the wedding of digital technology and the free market” manages to “realize the most extravagant ideals of classical communitarian anarchism” (15). Above all, it bears a marked resemblance to Barbrook and Cameron’s (1996) landmark analysis of the “Californian Ideology,” that “bizarre mish-mash of hippie anarchism and economic liberalism beefed up with lots of technological determinism” emerging from the Wired magazine corners of the rise of networked computers, which claims that digital technology is the key to realizing freedom and autonomy (56). As the authors put it, “the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through profound faith in the emancipatory potential of new information technologies” (45).

My contribution follows the argument of Barbrook and Cameron’s exemplary study. As good Marxists, they recognized that ideology was not merely an abstract belief system but “offers a way of understanding the lived reality” (50) of a specific social base: the “digital artisans,” programmers, software developers, hackers, and other skilled technology workers who “not only tend to be well-paid, but also have considerable autonomy over their pace of work and place of employment” (49). Barbrook and Cameron located the antecedents of the Californian Ideology in Thomas Jefferson’s belief that democracy was best secured by self-sufficient individual farmers, a kind of freedom that, as the authors trenchantly note, “was based upon slavery for black people” (59).

    Thomas Jefferson is an oft-cited figure among the digital revolutionaries associated with copyright reform. Law professor James Boyle (2008) drafts Jefferson into the Free Culture movement as a fellow traveler who articulated “a skeptical recognition that intellectual property rights might be necessary, a careful explanation that they should not be treated as natural rights, and a warning of the monopolistic dangers that they pose” (21). Lawrence Lessig cites Jefferson’s remarks on intellectual property approvingly in Free Culture (2004, 84). “Thomas Jefferson and the other Founding Fathers were thoughtful, and got it right,” states Kembrew McLeod (2005) in his discussion of the U.S. Constitution’s clauses on patent and copyright (9).

    There is a deeper political and economic resonance between Jefferson and internet activists beyond his views on intellectual property. Jefferson’s ideal productive arrangement of society was small individual landowners and petty producers: the yeoman farmer. Jefferson believed that individual self-sufficiency guaranteed a democratic society. The abundance of land in the New World and the willingness to expropriate it from the indigenous peoples living there gave his fantasy a plausibility and attraction many Americans still feel today. It was this vision of America as a frontier, an empty space waiting to be filled by new social formations, that makes his philosophy resonate with the techno-adept described by Barbrook and Cameron, who viewed the Internet in a similar way. One of these Californians, John Perry Barlow (1996), who famously declared to “governments of the Industrial World” that “cyberspace does not lie within your borders,” even co-founded an organization dedicated to a deregulated internet called the “Electronic Frontier Foundation.”

    However, not everything online lent itself to the metaphor of a frontier. Particularly in the realm of music and video, artisans dealt with a field crowded with existing content, as well as thickets of intellectual property laws that attempted to regulate how that content was created and distributed. There could be no illusion of a blank canvas on which to project one’s ideal society: in fact, these artisans were noteworthy, not for producing work independently out of whole cloth, but for refashioning existing works through remix. Lawrence Lessig (2004) quotes mashup artist Girl Talk: “We’re living in this remix culture. This appropriation time where any grade-school kid has a copy of Photoshop and can download a picture of George Bush and manipulate his face how they want and send it to their friends” (14). The project of Lessig and others was not to create the conditions for erecting a new society upon a frontier, as a yeoman farmer might, but to politicize this class of artisans in order to challenge larger industrial concerns, such as record labels and film studios, who used copyright to protect their incumbent position. This very different terrain requires a different perspective from Jefferson’s.

    Thomas Jefferson’s vision is not the only expression of the fantasy of a society built on the basis of petty producers. In nineteenth-century Europe, where most land had long been tied up in hereditary estates, large and small, the yeoman farmer ideal held far less influence. Without a belief in abundant land, there could be no illusion of a blank canvas on which a new society could be created: some kind of revolutionary change would have to occur within and against the old one. And so a similar, yet distinct, political philosophy sprang up in France among a similar social base of artisans and craftsmen—those who tended to control their own work process and own their own tools—who made up a significant part of the French economy. As they were used to an individualized mode of production, they too believed that self-sufficiency guaranteed liberty and prosperity. The belief that society should be organized along the lines of petty individual commodity producers, without interference from the state—a belief remarkably consonant with a variety of digital utopians—found its most powerful expression in the ideas of Pierre-Joseph Proudhon. It is to his ideas that I now turn.

    What was Proudhonism?

Proudhon was an anarchist and an influential presence in the International Workingmen’s Association, of which Karl Marx was also a part, and his ideas were especially popular in his native France, where the economy was rooted far more deeply in small-scale artisanal production than was the industrial-scale capitalism Marx experienced in Britain. His first major work, What Is Property? ([1840] 2011) (Proudhon’s pithy answer: property is theft), caught the attention of Marx, who admired the work’s thrust and style even while he criticized its grasp of the science of political economy. After attempting to win over Proudhon by teaching him political economy and Hegelian dialectics, Marx became a vehement critic of Proudhon’s ideas, which held more sway over the First International than Marx’s own.

Proudhon was critical of the capitalism of his day, but he made his criticisms, along with his ideas for a better society, from the perspective of a specific class. Rather than analyze, as Marx did, the contradictions of capitalism through the figure of the proletarian, who possesses nothing but their own capacity to work, Proudhon understood capitalism from the perspective of the artisanal small producer, who owns and labors with their own small-scale means of production. In his survey of eighteenth- and nineteenth-century radical political economy, David McNally (1993) summarizes Proudhon’s beliefs: Proudhon “envisages a society [of] small independent producers—peasants and artisans—who own the products of their personal labour, and then enter into a series of equal market exchanges. Such a society will, he insists, eliminate profit and property, and ‘pauperism, luxury, oppression, vice, crime and hunger will disappear from our midst’” (140).

For Proudhon, the massive property accumulation of large firms and the accompanying state collusion distort these market exchanges. Under the prevailing system, he asserts in The Philosophy of Poverty, “there is irregularity and dishonesty in exchange” ([1847] 2012, 124), a problem exemplified by monopoly and its perversion of “all notions of commutative justice” (297). Monopoly permits unjust property extraction: Proudhon states in General Idea of the Revolution in the Nineteenth Century ([1851] 2003) that “the price of things is not proportionate to their VALUE: it is larger or smaller according to an influence which justice condemns, but the existing economic chaos excuses” (228). Exploitation thereby becomes a consequence of market disequilibria—the upward and downward deviations of price from value. It is a faulty market, warped by state intervention and too-powerful entrenched interests, that is the cause of injustice. The Philosophy of Poverty details all manner of economic disaster caused by monopoly: “the interminable hours, disease, deformity, degradation, debasement, and all the signs of industrial slavery: all these calamities are born of monopoly” (290).

    As McNally’s (1993) work shows, blaming economic woes on “monopolists” and “middlemen” ran rife in popular critiques of political economy during the seventeenth and eighteenth centuries, leading many radicals to call for free trade as a solution to widespread poverty. Proudhon’s anarchism was part of this general tendency. In General Idea of the Revolution in the Nineteenth Century ([1851] 2003), he railed against “middlemen, commission dealers, promoters, capitalists, etc., who, in the old order of things, stand in the way of producer and consumer” (90). The exploiters worked by obstructing and manipulating the exchange of goods and services on the market.

    Proudhon’s particular view of economic injustice begets its own version of how best to change it. His revolutionary vision centers on the end of monopolies and currency reform, two ways that “monopolists” intervened in the smooth functioning of the market. He remained dedicated to the belief that the ills of capitalism arose from the concentrations of ownership creating unjust political power that could further distort the functioning of the market, and envisioned a market-based society where “political functions have been reduced to industrial functions, and that social order arises from nothing but transactions and exchanges” (1979, 11).

    Proudhon evinced a technological optimism that Marx would later criticize. From his petty producer standpoint, he believed technology would empower workers by overcoming the division of labor:

    Every machine may be defined as a summary of several operations, a simplification of powers, a condensation of labor, a reduction of costs. In all these respects machinery is the counterpart of division. Therefore through machinery will come a restoration of the parcellaire laborer, a decrease of toil for the workman, a fall in the price of his product, a movement in the relation of values, progress towards new discoveries, advancement of the general welfare. ([1847] 2012, 167)

    While Proudhon recognized some of the dynamics by which machinery could immiserate workers through deskilling and automating their work, he remained strongly skeptical of organized measures to ameliorate this condition. He rejected compensating the unemployed through taxation because it would “visit ostracism upon new inventions and establish communism by means of the bayonet” ([1847] 2012, 207); he also criticized employing out-of-work laborers in public works programs. Technological development should remain unregulated, leading to eventual positive outcomes: “The guarantee of our liberty lies in the progress of our torture” (209).

    Marx’s Critique of Proudhon

Marx, after attempting to influence Proudhon, became one of his most vehement critics, attacking his rival’s arguments both major and marginal. Marx had a very different understanding of the new industrial society of the nineteenth century, and he diagnosed his rival’s misrepresentations of capitalism as derived from a particular class basis ([1865] 2016). Proudhon’s theories emanated “from the standpoint and with the eyes of a French small-holding peasant (later petit bourgeois)” rather than those of the proletarian, who possesses nothing but labor-power, which must be exchanged for a wage from the capitalist.

    Since small producers own their own tools and depend largely on their own labor, they do not perceive any conflict between ownership of the means of production and labor: analysis from this standpoint, such as Proudhon’s, tends to collapse these categories together. Marx’s theorization of capitalism centered an emergent class of industrial proletarians, who, unlike small producers, owned nothing but their ability to sell their labor-power for a wage. Without any other means of survival, the proletarian could not experience the “labor market” as a meeting of equals arriving at a mutually beneficial exchange of commodities; that picture was an abstraction from the concrete truth that working for whatever wage was offered was compulsory rather than voluntary. Further, it was this very market for labor-power that, in the guise of equal exchange of commodities, helped to obscure that capitalist profit depended on extracting value from workers beyond what their wages compensated. This surplus value emerged in the production process, not, as Proudhon argued, at a later point where the goods produced were bought and sold. Without a conception of a contradiction between ownership and labor, the petty producer standpoint cannot see exploitation occurring in production.

    Instead, Proudhon saw exploitation occurring after production, during market exchanges distorted by unfair monopolies, kept intact through state intervention, against which petty producers could not compete. However, Marx ([1867] 1992) demonstrated that “monopolies” were simply the outcome of the concentration of capital due to competition: in his memorable wording from Capital, “One capitalist always strikes down many others” (929). As producers compete, more and more of them fail and are proletarianized, and capital is held in fewer and fewer hands. In other words, monopolies are a feature, not a bug, of market economies.

    Proudhon’s misplaced emphasis on villainous monopolies is part of a greater error in diagnosing the momentous changes in the nineteenth-century economy: a neglect of the centrality of massive industrial-scale production to mature capitalism. In the first volume of Capital, Marx ([1867] 1992) argues that petty production was a historical phenomenon that would give way to capitalist production: “Private property which is personally earned, i.e., which is based, as it were, on the fusing together of the isolated, independent working individual with the conditions of his labour, is supplanted by capitalist private property, which rests on exploitation of alien, but formally free labour” (928). Through this competitive churn, capital—and with it, labor—concentrates.

    However, petty production persisted alongside industrial capitalism in ways that masked how the continued existence of the former relied on the latter. Under capitalism, the commodification of labor-power through the wage relationship transforms concrete acts of labor into labor in the abstract, within a system of industrial production for exchange. This abstract labor, the basis of surplus value, is for Marx the “specific social form of labour” in capitalism (Murray 2016, 124). Without understanding abstract labor, Proudhon could not perceive how capitalism functioned not simply as a means of producing profit, but as a system structuring all labor in society.

    The importance of abstract labor to capitalism also meant that Proudhon’s plans to reform currency by pegging its value to labor-time would fail. As Marx ([1847] 1973) puts it in his book-length critique of Proudhon, “in large-scale industry, Peter is not free to fix for himself the time of his labor, for Peter’s labor is nothing without the co-operation of all the Peters and all the Pauls who make up the workshop” (77). In other words, because commodities under capitalism are manufactured through a complex division of labor, with different workers exercising differing levels of labor productivity, it is impossible to apportion specific quantities of time to specific labors on individual commodities. Without an understanding of the role of abstract labor in capitalist production, Proudhon simply could not grapple with the actual mechanisms of capitalism’s structuring of labor in society, and so could not develop plans to overcome it. This overcoming could only occur through a political intervention that sought to organize production from the point of view of its socialization, not, as Proudhon believed, through reforming elements of the exchange system to preserve individual producers.
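
    A toy calculation—mine, not Marx’s or Proudhon’s, with invented figures—makes the difficulty concrete. Borrowing the Peter and Paul of Marx’s example, suppose the two produce the same commodity at different productivities:

    ```latex
    % Illustrative figures, not drawn from the source.
    % Peter: 8 hours of concrete labor yield 4 units; Paul: 8 hours yield 8 units.
    \text{Peter: } \frac{8\ \text{h}}{4\ \text{units}} = 2\ \text{h/unit}
    \qquad
    \text{Paul: } \frac{8\ \text{h}}{8\ \text{units}} = 1\ \text{h/unit}
    % A currency crediting concrete hours would price identical units differently;
    % only the social average over all producers can serve as a measure of value:
    \text{average} = \frac{8 + 8\ \text{h}}{4 + 8\ \text{units}} = \tfrac{4}{3}\ \text{h/unit}
    ```

    Under a labor-time currency, identical commodities would bear two different prices depending on who happened to make them; any workable measure must average across the whole workshop, which is just to say that value is set by abstract, social labor in production, not by individual timekeeping at the moment of exchange.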

    The Roots of Digital Proudhonism

    Many of Proudhon’s arguments were revived among digital radicals and reformers during the battles over copyright precipitated by networked digital technologies in the 1990s, of which Napster is the exemplary case. The techno-optimistic belief that the internet would bring radical democratic change to cultural production took on a highly Proudhonian cast. The internet would “empower creators” by eliminating the “middlemen” and “gatekeepers,” such as record labels and distributors, who were the ultimate source of exploitation, and by allowing exchange to happen on a “peer-to-peer” basis. By subverting the “monopoly” granted by copyright protections, radical change would happen on the basis of increased potential for voluntary market exchange, not political or social revolution.

    Siva Vaidhyanathan’s The Anarchist in the Library (2005) is a representative example of this argument, one made with explicit appeals to anarchist philosophy. According to Vaidhyanathan, “the new [peer-to-peer] technology evades the professional gatekeepers, flattening the production and distribution pyramid…. Digitization and networking have democratized the production of music” (48). This democratization by peer-to-peer distribution threatens “oligarchic forces such as global entertainment conglomerates” even as it works to “empower artists in new ways and connect communities of fans” (102).

    The seeds of Digital Proudhonism were planted even earlier than Napster, in the beliefs and practices of the Free Software movement. Threatened by intellectual property protections that signaled the corporatization of software development, the academics and amateurs of the Free Software movement developed alternative licenses that would keep software code “open,” and thus available for any interested coder to share and build upon. This successfully protected the autonomous and collaborative working practices of the group. The movement’s major success was the Linux operating system, collaboratively built by a distributed team of mostly volunteer programmers who created a free alternative to the proprietary systems of Microsoft and Apple.

    Linux indicated to those examining the front lines of technological development that, far from being just a software development model, Free Software could actually be an alternative mode of production, and even a harbinger of democratic revolution. The triumph of an unpaid network-based community of programmers creating a free and open product in the face of an IP-dependent monopoly like Microsoft seemed to realize one of Marx’s ([1859] 1911) technologically determinist prophecies from A Contribution to the Critique of Political Economy:

    At a certain stage of their development, the material forces of production in society come into conflict with the existing relations of production or—what is but a legal expression of the same thing—with the property relations within which they had been at work before. From forms of development of the forces of production these relations turn into their fetters. Then comes the era of social revolution. (12)

    The Free Software movement provoked a wave of political initiatives and accompanying theorizations of a new digital economy based on what Yochai Benkler (2006) called “commons-based peer production.” With networked personal computers so widely distributed, “[t]he material requirements for effective information production and communication are now owned by numbers of individuals several orders of magnitude larger than the number of owners of the basic means of information production and exchange a mere two decades ago” (4). Suddenly, and almost as if by accident, the means of production were in the hands, not of corporations or states, but of individuals: a perfect encapsulation of the petty producer economy.

    The classification of file sharing technologies such as Napster as “peer-to-peer” solidified this view. Napster’s design allowed users to exchange MP3 files by linking “peers” to one another, without storing files on Napster’s own servers. This performed two useful functions. It dispersed the server load for hosting and exchanging files among the computers and connections of Napster’s user base, alleviating what would have been massive bandwidth expenses. It also gave Napster a defense against charges of infringement: because its own servers never copied files, the design might shield the company from the liability that had doomed the site MP3.com, which had hosted user files directly.
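
    A schematic sketch of this architecture—illustrative Python, not Napster’s actual code, with all class and variable names invented—shows the topology: the catalog of who has what is held centrally, while the file bytes move directly between peers.

    ```python
    # A minimal model of the design described above: a central index maps
    # song titles to the peers sharing them; transfers bypass the index.

    class CentralIndex:
        """The central listing service that every search still passes through."""
        def __init__(self):
            self.catalog = {}  # song title -> set of peer addresses

        def register(self, peer_addr, titles):
            for title in titles:
                self.catalog.setdefault(title, set()).add(peer_addr)

        def search(self, title):
            return self.catalog.get(title, set())


    class Peer:
        """A user's machine: hosts its own files, fetches from other peers."""
        def __init__(self, addr, library):
            self.addr = addr
            self.library = dict(library)  # title -> file contents, held locally

        def join(self, index):
            index.register(self.addr, list(self.library))

        def fetch(self, title, index, network):
            # Only the lookup is centralized; the copy itself is peer to peer,
            # so no file ever touches the index operator's own servers.
            for addr in index.search(title):
                self.library[title] = network[addr].library[title]
                return True
            return False


    index = CentralIndex()
    network = {
        "peer-a": Peer("peer-a", {"song.mp3": b"..."}),
        "peer-b": Peer("peer-b", {}),
    }
    for peer in network.values():
        peer.join(index)

    print(network["peer-b"].fetch("song.mp3", index, network))  # True
    ```

    Note that without the CentralIndex no peer can find anything: the “flat” network still hangs on a single listing service, a dependence the next paragraph takes up.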

    While Napster’s suggestion that corporate structures for the distribution of culture could be supplanted by a voluntary federation of “peers” was important, it was ultimately a mystification. Not only did the courts find Napster liable for facilitating infringement, but the flat, “decentralized” topology of Napster still relied on the company’s central listing service to connect peers. Yet the ideological impact was profound. A law review article by Raymond Ku (2002), then director of the Institute of Law, Science & Technology at Seton Hall University School of Law, is illustrative of both the nature of these arguments and how widespread and respectable they became in the post-Napster era: “the argument for copyright is primarily an argument for protecting content distributors in a world in which middlemen are obsolete. Copyright is no longer needed to encourage distribution because consumers themselves build and fund the distribution channels for digital content” (263). Clay Shirky’s (2008) paeans to “the mass amateurization of efforts previously reserved for media professionals” sound a similar note (55), presenting a technologically functionalist explanation for the existence of “gatekeeper” media industries: “It used to be hard to move words, images, and sounds from creator to consumer… The commercial viability of most media businesses involves providing those solutions, so preservation of the original problems became an economic imperative. Now, though, the problems of production, reproduction, and distribution are much less serious” (59). This narrative has remained persistent years after the brief flourishing of Napster: “the rise of peer-to-peer distribution systems… make middlemen hard to identify, if not cutting them out of the process altogether” (Kernfeld 2011, 217).

    This situation was given an emancipatory political valence by intellectuals associated with copyright reform. Eager to protect an emerging sector of cultural production founded on sampling, remixing, and file sharing, they described the accumulation of digital information and media online as a “commons,” which could be treated differently from other forms of private property. Due to the lack of rivalry among digital goods (Benkler 2006, 36), users do not deplete the common stock, and so should benefit from a laxer approach to property rights. Law professor Lawrence Lessig (2004) started an initiative, Creative Commons, dedicated to establishing new licenses that would “build a layer of reasonable copyright on top of the extremes that now reign” (282). Part of Lessig’s argument for Creative Commons classifies media production and distribution, such as making music videos or mashups, as a “form of speech.” Copyright therefore acted as unjust government regulation, and so had to be resisted. “It is always a bad deal for the government to get into the business of regulating speech markets,” Lessig argues, even going so far as to raise the specter of communist authoritarianism: “It is the Soviet Union under Brezhnev” (128). Here Lessig performs a delicate rhetorical sleight of hand: by positioning cultural production as speech, he reifies a vision of such production as emanating from a solitary, individual producer who must remain unencumbered when bringing that speech to market.

    Cory Doctorow (2014), a poster child of achievement in the new peer-to-peer world (in Free Culture, Lessig boasts of Doctorow’s successful promotional strategy of giving away electronic copies of his books for free), argues from a pro-market position against middlemen in Information Doesn’t Want to Be Free: “copyright exists to protect middlemen, retailers, and distributors from being out-negotiated by creators and their investors” (48). While the argument remains the same, some targets have shifted: “investors” are “publishers, studios, record labels,” while “intermediaries” are the platforms of distribution: “a distributor, a website like YouTube, a retailer, an e-commerce site like Amazon, a cinema owner, a cable operator, a TV station or network” (27).

    While the thrust of these critiques of copyright focuses on egregious overreach by the culture industries and their assault upon all manner of benign noncommercial activity, they also reveal a vision of an alternative cultural economy of independent producers who, while not necessarily anti-capitalist, can escape the clutches of massive centralized corporations through networked digital technologies. This promises both economic and political freedom: independence from control and regulation, and maximum opportunity on the market. “By giving artists the tools and technologies to take charge of their own production, marketing, and distribution, digitization underscored the disequilibrium of traditional record contracts and offered what for many is a preferable alternative” (Sinnreich 2013, 124). As it so often does, the fusion of ownership and labor characteristic of the petty producer standpoint, the structure of feeling of the independent artisan, articulates itself through the mantra of “Do It Yourself.”

    These analyses and polemics reproduce the Proudhonist vision of an alternative to existing digital capitalism. Individual independent creators will achieve political autonomy and economic benefit through the embrace of digital network technologies, as long as these creators are allowed to compete fairly with incumbents. Rather than insist on collective regulation of production, Digital Proudhonism seeks forms of deregulation, such as copyright reform, that will chip away at the “monopoly” power of existing media corporations that fetters the market chances of these digital artisans.

    Digital Proudhonism Today

    Rooted in emergent digital methods of cultural production, the first wave of Digital Proudhonism shored up its petty producer standpoint through a rhetoric that centered the figure of the artist or “creator.” The contemporary term is the more expansive “the creative,” which lionizes a larger share of the knowledge workers of the digital economy. As Sarah Brouillette (2009) notes, thinkers from management gurus such as Richard Florida to radical autonomist Marxist theorists such as Paolo Virno “broadly agree that over the past few decades more work has become comparable to artists’ work.” As a kind of practical consciousness, Digital Proudhonism spreads easily through the channels of this so-called “creative class,” its politics and worldview traveling under a host of other banners. The initiatives it inspires self-consciously seek to realize the ideals of Proudhonism in fields beyond the confines of music and film, with impact in manufacturing, social organization, and finance.

    The maker movement is one prominent translation of Digital Proudhonism into a challenge to the contemporary organization of production, with allegedly radical effects on politics and economics. With the advent of new production technologies, such as 3D printers and digital design tools, “makers” can take the democratizing promise of the digital commons into the physical world. Just as digital technology supposedly distributes the means of production of culture across a wider segment of the population, so too will it spread manufacturing blueprints, blowing apart the restrictions of patents the same way Napster tore copyright asunder. “The process of making physical stuff has started to look more like the process of making digital stuff,” claims Chris Anderson (2012), author of Makers: The New Industrial Revolution (25). This has a radical effect: a realization of the goals of socialism via the unfolding of technology and the granting of access. “If Karl Marx were here today, his jaw would be on the floor. Talk about ‘controlling the tools of production’: you (you!) can now set factories into motion with a mouse click” (26). The key to this revolution is the ability of open-source methods to lower costs, thereby fusing the roles of inventor and entrepreneur (27).

    Anderson’s “new industrial revolution” is one of a distinctly Proudhonian cast. Digital design tools are “extending manufacturing to a hugely expanded population of producers—the existing manufacturers plus a lot of regular folk who are becoming entrepreneurs” (41). The analogy to the rise of remix culture and amateur production lionized by Lessig is deliberate: “Sound familiar? It’s exactly what happened with the Web” (41). Anderson envisions the maker movement as akin to the nineteenth-century petty production championed by Proudhon: cottage industries “were closer to what a Maker-driven New Industrial Revolution might be than are the big factories we normally associate with manufacturing” (49). Anderson’s preference for the small producer over the large factory echoes Proudhon. The subject of this revolution is not the proletarian at work in the large factory, but the artisan who owns their own tools.

    A more explicitly radical perspective comes from the avowedly Proudhonist Center for a Stateless Society (C4SS), a “left market anarchist think tank and media center” deeply conversant in libertarian and so-called anarcho-capitalist economic theory. Like Anderson, C4SS subscribes to the techno-utopian potential of a new arrangement of production driven by digital technology, which could reduce the prices of goods and bring them within the reach of anyone (once again, music piracy is held up as a precursor). However, this potential has not been realized because “economic ruling classes are able to enclose the increased efficiencies from new technology as a source of rents mainly through artificial scarcities, artificial property rights, and entry barriers enforced by the state” (Carson 2015a). Monopolies, enforced by the state, have “artificially” distorted free market transactions.

    These monopolies, in the form of intellectual property rights, are preventing a proper Proudhonian revolution in which everyone would control their own individual production process. “The main source of continued corporate control of the production process is all those artificial property rights such as patents, trademarks, and business licenses, that give corporations a monopoly on the conditions under which the new technologies can be used” (Carson 2015a). However, once these artificial monopolies are removed, corporations will lose their power and we can have a world of “small neighborhood cooperative shops manufacturing for local barter-exchange networks in return for the output of other shops, of home microbakeries and microbreweries, surplus garden produce, babysitting and barbering, and the like” (Carson 2015a).

    This revolution is a quiet one, requiring no strikes or other confrontations with capitalists. Instead, the answer is to create this new economy within the larger one, and hollow it out from the inside:

    Seizing an old-style factory and holding it against the forces of the capitalist state is a lot harder than producing knockoffs in a garage factory serving the members of a neighborhood credit-clearing network, or manufacturing open-source spare parts to keep appliances running. As the scale of production shifts from dozens of giant factories owned by three or four manufacturing firms, to hundreds of thousands of independent neighborhood garage factories, patent law will become unenforceable. (Carson 2015b)

    As Marx pointed out long ago, such petty producer fantasies of individually owned and operated manufacturing ironically rely upon the massive amounts of surplus generated by proletarians working in large-scale factories. The devices and infrastructures of the internet itself, as described by Nick Dyer-Witheford (2015) in his appropriately titled Cyber-Proletariat, are an obvious example. But proletarian labor also appears in the Digital Proudhonists’ own utopian fantasies. Anderson, describing the change in innovation wrought by the internet, imagines how his grandfather’s invention of a sprinkler system would have gone differently: “When it came time to make more than a handful of his designs, he wouldn’t have begged some manufacturer to license his ideas, he would have done it himself. He would have uploaded his design files to companies that could make anything from tens to tens of thousands for him, even drop-shipping them directly to customers” (15). These “companies,” of course, are staffed by workers in facilities of mass production, very different from “makers,” and their labor is obscured by an influential ideology of artisans who believe themselves reliant on nothing but a personal computer and their own creativity.

    A recent Guardian column by Paul Mason, anti-capitalist journalist and author of the techno-optimistic Postcapitalism, serves as a further example. Mason (2016) argues, similarly to the C4SS, that intellectual property is the glue holding together massive corporations, and the key to their power over production. Simply by giving up on patents, as recommended by Anderson, Proudhonists will outflank capitalism on the market. His example is the “revolutionary” business model of the craft brewery chain BrewDog, which “open-sourced its recipe collection” by releasing the information publicly, unlike its larger corporate competitors. For Mason, this is an astonishing act of economic democracy: armed with BrewDog’s recipes, “All you would need to convert them from homebrew approximations to the actual stuff is a factory, a skilled workforce, some raw materials and a sheaf of legal certifications.” In other words, all that is needed to achieve postcapitalism is capitalism precisely as Marx described it.

    The pirate fantasies of subverting monopolies extend beyond the initiatives of makers. The Digital Proudhonist belief in revolutionary change—rooted in individual control of production, and in exchange on markets liberated from incumbents such as corporations and the state—drives much of the innovation on the margins of tech. A recent treatise on the digital currency Bitcoin lauds Napster’s ability to “cut out the middlemen,” likening the currency to the file sharing technology (Kelly 2014, 11). “It is a quantum leap in the peer-to-peer network phenomenon. Bitcoin is to value transfer what Napster was to music” (33). Much like the advocates of digital currencies, Proudhon believed that state control of money was an unfair manipulation of the market, and sought to develop alternative currencies and banks rooted in labor-time, a project that Marx criticized for its misunderstanding of the role of abstract labor in production.

    In this way, Proudhon and his beliefs fit naturally into the dominant ideologies surrounding Bitcoin and other cryptocurrencies: that economic problems stem from the conspiratorial manipulation of “fiat” currency by national governments and financial organizations such as the Federal Reserve. In light of recent analyses suggesting that Bitcoin functions less as a means of exchange than as a sociotechnical formation to which an array of faulty right-wing beliefs about economics adheres (Golumbia 2016), and the revelation that contemporary fascist groups rely on Bitcoin and other cryptocurrencies to fund their activities (Ebner 2018), it is clear that Digital Proudhonism exists comfortably beside the most reactionary ideologies. Historically, this was true of Proudhon’s own work as well. As Zeev Sternhell (1996) describes, the early twentieth-century French political organization the Cercle Proudhon was captivated by Proudhon’s opposition to Marxism, his distaste for democracy, and his anti-Semitism. According to Sternhell, the group was an influential source of French proto-fascist thought.

    Alternatives

    The goal of this paper is not to question the creativity of remix culture or the maker movement, to indict their potentials for artistic expression, or to negate all their criticisms of intellectual property. What I wish to criticize are the outsized economic and political claims made about them. These claims have an impact on policy, such as Obama’s “Nation of Makers” initiative (The White House Office of the Press Secretary 2016), which draws upon numerous federal agencies, hundreds of schools, as well as educational product companies to spark “a renaissance of American manufacturing and hardware innovation.” But further, like Marx, I think not only that Proudhonism rests on incorrect analyses of cultural labor, but that such ideas lead to bad politics. As Astra Taylor (2014) extensively documents in The People’s Platform, for all the exclamations of new opportunities with the end of middlemen and gatekeepers, the creative economy is as difficult as ever for artists to navigate; she notes that writers like Lessig have replaced the critique of the commodification of culture with arguments about state and corporate control (26-7). Meanwhile, many of the fruits of this disintermediation have been plucked by an exploitative “sharing economy” whose platforms use “peer-to-peer” to subvert all manner of regulations; at least one commentator has invoked Napster’s storied ability to “cut out the middlemen” to describe AirBnB and Uber (Karabel 2014).

    Digital Proudhonism and its vision of federations of independent individual producers and creators (perhaps now augmented with the latest cryptographic tools) dominates the imagination of a radical challenge to digital capitalism. Its critiques of the corporate internet have become common sense. What kind of alternative radical vision is possible? Here I believe it is useful to return to the core of Marx’s critique of Proudhon.

    Marx saw that in the unromantic labor of proletarians—combining varying levels of individual productivity within the factory through machines that are themselves the product of social labor—capitalism’s dynamics create a historically novel form of production, social production, along with new forms of culture and social relations. For Marx ([1867] 1992), this was potentially the basis for an economy beyond capitalism. To attempt to move “back” to individual production was reactionary: “As soon as the workers are turned into proletarians, and their means of labour into capital, as soon as the capitalist mode of production stands on its own feet, then the further socialization of labour and further transformation of the soil and other means of production into socially exploited and, therefore, communal means of production takes on a new form” (928).

    The socialization of production under the development of the means of production—the necessity of greater collaboration and the reliance on past labors in the form of machines—gives rise to a radical redefinition of the relationship to one’s output. No one can claim a product was made by them alone; rather, production demands to be recognized as social. Describing the socialization of labor through industrialization in Socialism: Utopian and Scientific, Engels ([1880] 2008) states, “The yarn, the cloth, the metal articles that now came out of the factory were the joint product of many workers, through whose hands they had successively to pass before they were ready. No one person could say of them: ‘I made that; this is my product’” (56). To put it in the language of cultural production, there can be no author. Or, in another implicit recognition that the work of today relies on the work of many others, past and present: everything is a remix.

    Or instead of a remix, a “vortex,” to use the language of Nick Dyer-Witheford (2015), whose Cyber-Proletariat reminds us that the often-romanticized labor of digital creators and makers is but one stratum among the many that make up digital culture. The creative economy is a relatively privileged sector in an immense global “factory” made up of layers of formal and informal workers operating at the points of production, distribution, and consumption, from tantalum mining to device manufacture to call center work to app development. The romance of “DIY” obscures the reality that nothing digital is done by oneself: it is always already a component of a larger formation of socialized labor.

    The labor of digital creatives and innovators, sutured as it is to a technical apparatus fashioned from dead labor and meant for producing commodities for profit, is therefore already socialized. While some of this socialization is apparent in peer production, much of it is mystified through the real abstraction of commodity fetishism, which masks socialization under wage relations and contracts. Rather than further rely on these contracts to better benefit digital artisans, a Marxist politics of digital culture would begin from the fact of socialization, and as Radhika Desai (2011) argues, take seriously Marx’s call for “a general organization of labour in society” via political organizations such as unions and labor parties (212). Creative workers could align with others in the production chain as a class of laborers rather than as an assortment of individual producers, and form the kinds of organizations, such as unions, that have been the vehicles of class politics, with the aim of controlling society’s means of production, not simply one’s “own” tools or products. These would be bonds of solidarity, not bonds of market transactions. Then the apparatus of digital cultural production might be controlled democratically, rather than by the despotism of markets and private profit.

    _____

    Gavin Mueller holds a PhD in Cultural Studies from George Mason University. He teaches in the New Media and Digital Culture program at the University of Amsterdam.

    _____

    Notes

    [1] The Pirate Bay, the largest and most antagonistic site of the peer-to-peer movement, had founders who identified as libertarian, socialist, and apolitical, respectively; it also received funding from Carl Lundström, an entrepreneur associated with far-right movements (Schwartz 2014, 142).

    _____

    Works Cited

    • Anderson, Chris. 2012. Makers: The New Industrial Revolution. New York: Crown Business.
    • Barbrook, Richard and Andy Cameron. 1996. “The Californian Ideology.” Science as Culture 6:1. 44-72.
    • Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation.
    • Benkler, Yochai. 2006. The Wealth of Networks. New Haven, CT: Yale University Press.
    • Boyle, James. 2008. Public Domain: Enclosing the Commons of the Mind. New Haven, CT: Yale University Press.
    • Brouillette, Sarah. 2009. “Creative Labor.” Mediations: Journal of the Marxist Literary Group 24:2. 140-149.
    • Carson, Kevin. 2015a. “Nothing to Fear from New Technologies if the Market is Free.” Center for a Stateless Society.
    • Carson, Kevin. 2015b. “Paul Mason and His Critics (Such As They Are).” Center for a Stateless Society.
    • Desai, Radhika. 2011. “The New Communists of the Commons: Twenty-First-Century Proudhonists.” International Critical Thought 1:2. 204-223.
    • Doctorow, Cory. 2014. Information Doesn’t Want to Be Free: Laws for the Internet Age. San Francisco: McSweeney’s.
    • Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
    • Ebner, Julia. 2018. “The Currency of the Far-Right: Why Neo-Nazis Love Bitcoin.” The Guardian (Jan 24).
    • Engels, Friedrich. (1880) 2008. Socialism: Utopian and Scientific. Translated by Edward Aveling. New York: Cosimo Books.
    • Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis: University of Minnesota Press.
    • Karabel, Zachary. 2014. “Requiem for the Middleman.” Slate (Apr 25).
    • Kelly, Brian. 2014. The Bitcoin Big Bang: How Alternative Currencies Are About to Change the World. Hoboken: Wiley.
    • Kernfeld, Barry. 2011. Pop Song Piracy: Disobedient Music Distribution Since 1929. Chicago: University of Chicago Press.
    • Ku, Raymond Shih Ray. 2002. “The Creative Destruction of Copyright: Napster and the New Economics of Digital Technology.” University of Chicago Law Review 69:1. 263-324.
    • Lessig, Lawrence. 2004. Free Culture: The Nature and Future of Creativity. New York: Penguin Books.
    • Lessig, Lawrence. 2008. Remix: Making Art and Commerce Thrive in the New Economy. New York: Penguin.
    • Marx, Karl. (1847) 1973. The Poverty of Philosophy. New York: International Publishers.
    • Marx, Karl. (1859) 1911. A Contribution to the Critique of Political Economy. Translated by N.I. Stone. Chicago: Charles H. Kerr and Co.
    • Marx, Karl. (1865) 2016. “On Proudhon.” Marxists Internet Archive.
    • Marx, Karl. (1867) 1992. Capital: A Critique of Political Economy, Volume 1. Translated by Ben Fowkes. London: Penguin Books.
    • Mason, Paul. 2016. “BrewDog’s Open-Source Revolution is at the Vanguard of Postcapitalism.” The Guardian (Feb 29).
    • McLeod, Kembrew. 2005. Freedom of Expression: Overzealous Copyright Bozos and Other Enemies of Creativity. New York: Doubleday.
    • McNally, David. 1993. Against the Market: Political Economy, Market Socialism and the Marxist Critique. London: Verso.
    • Murray, Patrick. 2016. The Mismeasure of Wealth: Essays on Marx and Social Form. Leiden: Brill.
    • Proudhon, Pierre-Joseph. (1840) 2011. “What is Property?” In Property is Theft! A Pierre-Joseph Proudhon Reader, edited by Iain McKay. Translated by Benjamin R. Tucker. Edinburgh: AK Press.
    • Proudhon, Pierre-Joseph. (1847) 2012. The Philosophy of Poverty: The System of Economic Contradictions. Translated by Benjamin R. Tucker. Floating Press.
    • Proudhon, Pierre-Joseph. (1851) 2003. General Idea of the Revolution in the Nineteenth Century. Translated by John Beverly Robinson. Mineola, NY: Dover Publications, Inc.
    • Proudhon, Pierre-Joseph. (1863) 1979. The Principle of Federation. Translated by Richard Vernon. Toronto: University of Toronto Press.
    • Schwartz, Jonas Andersson. 2014. Online File Sharing: Innovations in Media Consumption. New York: Routledge.
    • Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin.
    • Sinnreich, Aram. 2013. The Piracy Crusade: How the Music Industry’s War on Sharing Destroys Markets and Erodes Civil Liberties. Amherst, MA: University of Massachusetts Press.
    • Sternhell, Zeev. 1996. Neither Right Nor Left: Fascist Ideology in France. Princeton, NJ: Princeton University Press.
    • Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in a Digital Age. New York: Metropolitan Books.
    • Vaidhyanathan, Siva. 2005. The Anarchist in the Library: How the Clash Between Freedom and Control is Hacking the Real World and Crashing the System. New York: Basic Books.
    • The White House Office of the Press Secretary. 2016. “New Commitments in Support of the President’s Nation of Makers Initiative to Kick Off 2016 National Week of Making.” June 17.
    • Williams, Raymond. 1977. Marxism and Literature. Oxford: Oxford University Press.
    • Winner, Langdon. 1997. “Cyberlibertarian Myths and The Prospects For Community.” Computers and Society 27:3. 14-19.

     

  • Zachary Loeb — From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ After the Digital Turn

    Zachary Loeb

    Without even needing to look at the copyright page, an aware reader may be able to date the work of a technology critic simply by considering the technological systems, or forms of media, being critiqued. Unfortunately, in discovering the date of a given critique one may be tempted to conclude that the critique itself must surely be dated. Past critiques of technology may be read as outdated curios, considered prescient warnings that have gone unheeded, or blithely disregarded as the pessimistic braying of inveterate doomsayers. Yet, in the case of Lewis Mumford, even though his activity peaked by the mid-1970s, it would be a mistake to deduce from this that his insights are of no value to the world of today. Indeed, when it comes to the “digital turn,” it is a “turn” in the road which Mumford saw coming.

    It would be reductive to simply treat Mumford as a critic of technology. His body of work includes literary analysis, architectural reviews, treatises on city planning, iconoclastic works of history, impassioned calls to arms, and works of moral philosophy (Mumford 1982; Miller 1989; Blake 1990; Luccarelli 1995; Wojtowicz 1996). Leo Marx described Mumford as “a generalist with strong philosophic convictions,” one whose body of work represents the steady unfolding of “a single view of reality, a comprehensive historical, moral, and metaphysical—one might say cosmological—doctrine” (L. Marx 1990: 167). In the opinion of the literary scholar Charles Molesworth, Mumford is an “axiologist with a clear social purpose: he wants to make available to society a better and fuller set of harmoniously integrated values” (Molesworth 1990: 241), while Christopher Lehmann-Haupt caricatured Mumford as “perhaps our most distinguished flagellator,” and Lewis Coser denounced him as a “prophet of doom” who “hates almost all modern ideas and modern accomplishments without discrimination” (Mendelsohn 1994: 151-152). Perhaps Mumford is captured best by Rosalind Williams, who identified him alternately as an “accidental historian” (Williams 1994: 228) and a “cultural critic” (Williams 1990: 44), or by Don Ihde, who referred to him as an “intellectual historian” (Ihde 1993: 96). As for Mumford’s own views, he saw himself in the mold of the prophet Jonah, “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (Mumford 1979: 528).

    Therefore, in the spirit of this Jonah, let us go see what is happening in Nineveh after the digital turn. Drawing upon Mumford’s oeuvre, particularly the two-volume The Myth of the Machine, this paper investigates similarities between Mumford’s concept of “the megamachine” and the post-digital-turn technological world. In drawing out these resonances, I pay particular attention to the ways in which computers featured in Mumford’s theorizing of the “megamachine” and informed his darkening perception. In addition, I expand upon Mumford’s concept of “the megatechnic bribe” to argue that, after the digital turn, what takes place is a move from “the megatechnic bribe” towards what I term “megatechnic blackmail.”

    In a piece provocatively titled “Prologue for Our Times,” which originally appeared in The New Yorker in 1975, Mumford drolly observed: “Even now, perhaps a majority of our countrymen still believe that science and technics can solve all human problems. They have no suspicion that our runaway science and technics themselves have come to constitute the main problem the human race has to overcome” (Mumford 1975: 374). The “bad news” is that more than forty years later a majority may still believe that.

    Towards “The Megamachine”

    The two-volume Myth of the Machine was not Mumford’s first attempt to put forth an overarching explanation of the state of the world, mixing cultural criticism, historical analysis, and free-form philosophizing; he had previously attempted a similar feat with his Renewal of Life series.

    Mumford originally planned the work as a single volume, but soon came to realize that this project was too ambitious to fit within a single book jacket (Miller 1989, 299). The Renewal of Life ultimately consisted of four volumes: Technics and Civilization (1934), The Culture of Cities (1938), The Condition of Man (1944), and The Conduct of Life (1951)—of which Technics and Civilization remains the text that has received the greatest continued attention. A glance at the nearly twenty-year period over which these four books were written makes it obvious that they emerged from a time of immense change and upheaval, which certainly shaped their form and argument. The books fall evenly on opposite sides of two events that were to have a profound influence on Mumford’s worldview: the 1944 death of his son Geddes on the Italian front during World War II, and the dropping of atomic bombs on Hiroshima and Nagasaki in 1945.

    The four books fit oddly together and reflect Mumford’s steadily darkening view of the world—a pendulous swing from hopefulness to despair (Blake 1990, 286-287). With the Renewal of Life, Mumford sought to construct a picture of the sort of “whole” which could develop such marvelous potential, but which was so morally weak that it wound up using that strength for destructive purposes. Unwelcome though Mumford’s moralizing may have been, it was an attempt, albeit from a tragic perspective (Fox 1990), to explain why things were the way they were, and what steps needed to be taken for positive change to occur. That the changes taking place were, in Mumford’s estimation, changes for the worse propelled him to develop concepts like “the megamachine” and the “megatechnic bribe” to explain the societal regression he was witnessing.

    By the time Mumford began work on The Renewal of Life he had already established himself as a prominent architectural critic and public intellectual. Yet he remained outside of any distinct tradition, school, or political ideology. Mumford was an iconoclastic thinker whose ethically couched regionalist radicalism, influenced by the likes of Ebenezer Howard, Thorstein Veblen, Peter Kropotkin, and especially Patrick Geddes, placed him at odds with liberals and socialists alike in the early decades of the twentieth century (Blake 1990, 198-199). For Mumford the prevailing progressive and radical philosophies had been buried amongst the rubble of World War I; a fresh philosophy was needed, one that would find in history the seeds for social and cultural renewal, and Mumford thought himself well-equipped to develop it (Miller 1989, 298-299). He was hardly the first in his era to attempt such a synthesis (Lasch 1991): Oswald Spengler had already published a grim version of such a new philosophy (300). Indeed, there is a perhaps not-accidental parallel between Spengler’s title The Decline of the West and Mumford’s choice of The Renewal of Life as the title for his own series.

    In Mumford’s estimation, Spengler’s work was “more than a philosophy of history”; it was “a work of religious consolation” (Mumford 1938, 218). The two volumes of The Decline of the West are monuments to Prussian pessimism in which Spengler argues that cultures pass “from the organic to the inorganic, from spring to winter, from the living to the mechanical, from the subjectively conditioned to the objectively conditioned” (220). Spengler argued that this is the fate of all societies, and he believed that “the West” had entered into its winter. It is easy to read Spengler’s tracts as woebegone anti-technology dirges (Farrenkopf 2001, 110-112), or as a call for “Faustian man” (Western man) to assert dominance over the machine and wield it lest it be wielded against him (Herf 1984, 49-69); but Mumford observed that Spengler had “predicted, better than more hopeful philosophers, the disastrous downward course that modern civilization is now following” (Mumford 1938, 235). Spengler had been an early booster of the Nazi regime, if a later critic of it, and though Mumford criticized Spengler for the politics he helped unleash, Mumford still saw him as one with “much to teach the historian and the sociologist” (Mumford 1938, 227). Mumford was particularly drawn to, and influenced by, Spengler’s method of writing moral philosophy in the guise of history (Miller 1989, 301). And it may well be that Spengler’s example prompted Mumford to distance himself from being a more “hopeful” philosopher in his later writings. Nevertheless, where Spengler had gazed longingly towards the coming fall, Mumford, even in the grip of the megamachine, still believed that the fall could be avoided.

    Mumford concludes the final volume of The Renewal of Life, The Conduct of Life, with measured optimism, noting: “The way we must follow is untried and heavy with difficulty; it will test to the utmost our faith and our powers. But it is the way toward life, and those who follow it will prevail” (Mumford 1951, 292). Alas, as the following sections will demonstrate, Mumford grew steadily less confident in the prospects of “the way toward life,” and the rise of the computer only served to make the path more “heavy with difficulty.”

    The Megamachine

    The volumes of The Renewal of Life hardly had enough time to begin gathering dust before Mumford was writing another work that sought to explain why the prophesied renewal had not come. In the two volumes of The Myth of the Machine Mumford revisits the themes of The Renewal of Life while advancing an even harsher critique and developing his concept of the “megamachine.” The idea of the megamachine has been taken up for its explanatory potential by many others beyond Mumford in a range of fields: it was drawn upon by some of his contemporary critics of technology (Fromm 1968; Illich 1973; Ellul 1980), has been commented on by historians and philosophers of technology (Hughes 2004; Jacoby 2005; Mitcham 1994; Segal 1994), has been explored in post-colonial thinking (Alvares 1988), and has sparked cantankerous disagreements amongst those seeking to deploy the term to advance political arguments (Bookchin 1995; Watson 1997). It is a term that shares certain similarities with other concepts that aim to capture the essence of totalitarian technological control, such as Jacques Ellul’s “technique” (Ellul 1967) and Neil Postman’s “technopoly” (Postman 1993). It is an idea that, as I will demonstrate, is still useful for describing, critiquing, and understanding contemporary society.

    Mumford first gestured in the direction of the megamachine in his 1964 essay “Authoritarian and Democratic Technics” (Mumford 1964). There Mumford argued that small-scale technologies which require the active engagement of the human, that promote autonomy, and that are not environmentally destructive are inherently “democratic” (2-3); while large-scale systems that reduce humans to mere cogs, that rely on centralized control, and that are destructive of planet and people are essentially “authoritarian” (3-4). For Mumford, the rise of “authoritarian technics” was a relatively recent occurrence; however, by “recent” he had in mind “the fourth millennium B.C.” (3). Though Mumford considered “nuclear bombs, space rockets, and computers” all to be examples of contemporary “authoritarian technics” (5), he considered the first examples of such systems to have appeared under the aegis of absolute rulers who exploited their power and scientific knowledge for immense construction feats such as the building of the pyramids. Those endeavors had created “complex human machines composed of specialized, standardized, replaceable, interdependent parts—the work army, the military army, the bureaucracy” (3). In drawing out these two tendencies, Mumford was clearly arguing in favor of “democratic technics,” but he moved away from these terms once he coined the neologism “megamachine.”

    Like the Renewal of Life before it, The Myth of the Machine was originally envisioned as a single book (Mumford 1970, xi). The first of the two volumes represents something of a rewriting of Technics and Civilization, but gone from Technics and Human Development is the optimism that had animated the earlier work. By 1959 Mumford had dismissed Technics and Civilization as “something of a museum piece” wherein he had “assumed, quite mistakenly, that there was evidence for a weakening of faith in the religion of the machine” (Mumford 1934, 534). As Mumford wrote The Myth of the Machine he found himself looking back at decades of so-called technological progress and seeking an explanation as to why this progress seemed to consist primarily of mountains of corpses and rubble.

    With the rise of kingship, in Mumford’s estimation, came the ability to assemble and command people on a scale that had been previously unknown (Mumford 1967, 188). This “machine” functioned by fully integrating all of its components to complete a particular goal, and “when all the components, political and economic, military, bureaucratic and royal, must be included” what emerges is “the megamachine” and along with it “megatechnics” (188-189). It was a structure in which, originally, the parts were made not of steel, glass, stone, or copper but of flesh and blood—though each human component was assigned and slotted into a position as though it were a cog. While the fortunes of the megamachine ebbed and flowed for a period, Mumford saw the megamachine as becoming resurgent in the 1500s as faith in the “sun god” came to be replaced by the “divine king” exploiting new technical and scientific knowledge (Mumford 1970: 28-50). Indeed, in assessing the thought of Hobbes, Mumford goes so far as to state that “the ultimate product of Leviathan was the megamachine, on a new and enlarged model, one that would completely neutralize or eliminate its once human parts” (100).

    Unwilling to mince words, Mumford had started The Myth of the Machine by warning that with the “new ‘megatechnics’ the dominant minority will create a uniform, all-enveloping, super-planetary structure, designed for automatic operation” in which “man will become a passive, purposeless, machine-conditioned animal” (Mumford 1967, 3). Writing at the close of the 1960s, Mumford observed that the impossible fantasies of the controllers of the original megamachines were now actual possibilities (Mumford 1970, 238). The rise of the modern megamachine was the result of a series of historic occurrences: the French Revolution, which replaced the power of the absolute monarch with the power of the nation state; World War I, wherein scientists and scholars were brought into service of the state whilst moderate social welfare programs were introduced to placate the masses (245); and finally the emergence of tools of absolute control and destructive power such as the atom bomb (253). Figures like Stalin and Hitler were not exceptions to the rule of the megamachine but only instances that laid bare “the most sinister defects of the ancient megamachine”: its violent, hateful, and repressive tendencies (247).

    Even though the power of the megamachine may make it seem that resistance is futile, Mumford was no defeatist. Indeed, The Pentagon of Power ends with a gesture towards renewal that is reminiscent of his argument in The Conduct of Life—albeit with a recognition that the state of the world had grown steadily more perilous. A core element of Mumford’s argument is that the megamachine’s power was reliant on the belief invested in it (the “myth”); if such belief in the megamachine could be challenged, so too could the megamachine itself (Miller 1989, 156). The Pentagon of Power met with a decidedly mixed reaction: it was a main selection of the Book-of-the-Month Club, and The New Yorker serialized much of the argument about the megamachine (157). Yet many reviewers denounced Mumford for his pessimism; it was in a review of the book in the New York Times that Mumford was dubbed “our most distinguished flagellator” (Mendelsohn 1994, 151-154). And though Mumford chafed at being dubbed a “prophet of doom” (Segal 1994, 149), it is worth recalling that he liked to see himself in the mode of that “prophet of doom” Jonah (Mumford 1979).

    After all, even though Mumford held out hope that the megamachine could be challenged—that the Renewal of Life could still beat back The Myth of the Machine—he glumly acknowledged that the belief that the megamachine was “absolutely irresistible” and “ultimately beneficent…still enthralls both the controllers and the mass victims of the megamachine today” (Mumford 1967, 224). Mumford described this myth as operating like a “magical spell,” but as the discussion of the megatechnic bribe will demonstrate, it is not so much that the audience is transfixed as that they are bought off. Nevertheless, before turning to the topic of the bribe and blackmail, it is necessary to consider how the computer fit into Mumford’s theorizing of the megamachine.

    The Computer and the Megamachine

    Five years after the publication of The Pentagon of Power, Mumford was still claiming that “the Myth of the Machine” was “the ultimate religion of our seemingly rational age” (Mumford 1975, 375). While it is certainly fair to note that Mumford’s “today” is not our today, it would be foolhardy to dismiss the idea of the megamachine as mere anachronistic moralizing. And to credit the concept with its full prescience and continued utility, it is worth reading the text closely to consider the ways in which Mumford was writing about the computer—before the digital turn.

    Writing to his friend, the British garden city advocate Frederic J. Osborn, Mumford noted: “As to the megamachine, the threat that it now offers turns out to be even more frightening, thanks to the computer, than even I in my most pessimistic moments had ever suspected. Once fully installed our whole lives would be in the hands of those who control the system…no decision from birth to death would be left to the individual” (M. Hughes 1971, 443). It may be that Mumford was merely engaging in a bit of hyperbolic flourish in claiming that the computer trumped his “most pessimistic moments,” but Mumford was no stranger (or enemy) to pessimistic moments. He was always searching for fresh evidence of “renewal”; his deepening pessimism points to the kind of evidence he was actually finding. In constructing a narrative that traced the origins of the megamachine across history Mumford had been hoping to show “that human nature is biased toward autonomy and against submission to technology” (Miller 1990, 157), but in the computer Mumford saw evidence pointing in the opposite direction.

    In assessing the computer, Mumford drew a contrast between the basic capabilities of the computers of his day and the direction in which he feared that “computerdom” was moving (Mumford 1970, plate 6). Computers to him were not simply about controlling “the mechanical process” but also “the human being who once directed it” (189). Moving away from historical antecedents like Charles Babbage, Mumford emphasized Norbert Wiener’s attempt to highlight human autonomy, and he praised Wiener’s concern about the tendency of some technicians to view the world only in terms of the sorts of data that computers could process (189). Mumford saw some of the enthusiasm for the computer’s capability as rather “over-rated,” and he cited instances—such as the computer failure in the case of the Apollo 11 moon landing—as evidence that computers were not quite as all-powerful as some claimed (190). In the midst of a growing ideological adoration for computers, Mumford argued that their “life-efficiency and adaptability…must be questioned” (190). Mumford’s critique of computers can be read as an attempt on his part to undermine the faith in computers while such belief was still in its nascent cult state—before it could become a genuine world religion.

    Mumford does not assume a wholly dismissive position towards the computer. Instead he takes a stance toward it that is similar to his position towards most forms of technology: its productive use “depends upon the ability of its human employers quite literally to keep their own heads, not merely to scrutinize the programming but to reserve the right for ultimate decision” (190). To Mumford, the computer “is a big brain in its most elementary state: a gigantic octopus, fed with symbols instead of crabs,” but just because it could mimic some functions of the human mind did not mean that the human mind should be discarded (Mumford 1967, 29). The human brain was for Mumford infinitely more complex than a computer could be, and even where computers might catch up in terms of quantitative comparison, Mumford argued that the human brain would always remain superior in qualitative terms (39). Mumford had few doubts about the capability of computers to perform the functions for which they had been programmed, but he saw computers as fundamentally “closed” systems whereas the human mind was an “open” one; computers could follow their programs but he did not think they could invent new ones from scratch (Mumford 1970, 191). For Mumford the rise in the power of computers was linked largely to the shift away from the “old-fashioned” machines such as Babbage’s Calculating Engine—and towards the new digital and electric machines which were becoming smaller and more commonplace (188). And though Mumford clearly respected the ingenuity of scientists like Wiener, he amusingly suggested that “the exorbitant hopes for a computer dominated society” were really the result of “the ‘pecuniary-pleasure’ center” (191). While Mumford’s measured consideration of the computer’s basic functioning is important, what is of greater significance is his thinking regarding the computer’s place in the megamachine.

    Whereas much of Technics and Human Development focuses upon the development of the first megamachine, in The Pentagon of Power Mumford turns his focus to the fresh incarnation of the megamachine. This “new megamachine” was distinguished by the way in which it steadily did away with the need for the human altogether—now that there were plenty of actual cogs (and computers) human components were superfluous (258). To Mumford, scientists and scholars had become a “new priesthood” who had abdicated their freedom and responsibility as they came to serve the “megamachine” (268). But if they were the “priesthood,” then whom did they serve? As Mumford explained, in the command position of this new megamachine was to be found a new “ultimate ‘decision-maker’ and Divine King,” and this figure had emerged in “a transcendent, electronic form”: it was “the Central Computer” (273).

    Mumford was writing in 1970, before the rise of the personal computer or the smartphone, and his warnings about computers may have seemed somewhat excessive at the time. Yet, in imagining the future of “a computer dominated society” Mumford was forecasting that the growth of the computer’s power meant the consolidation of control by those already in power. Whereas the rulers of yore had dreamt of being all-seeing, with the rise of the computer such power ceased being merely a fantasy as “the computer turns out to be the Eye of the reinstated Sun God” capable of exacting “absolute conformity to his demands, because no secret can be hidden from him, and no disobedience can go unpunished” (274). And this “eye” saw a great deal: “In the end, no action, no conversation, and possibly in time no dream or thought would escape the wakeful and relentless eye of this deity: every manifestation of life would be processed into the computer and brought under its all-pervading system of control. This would mean, not just the invasion of privacy, but the total destruction of autonomy: indeed the dissolution of the human soul” (274-275). The mention of “the human soul” may be evocative of a standard bit of Mumfordian moralizing, but the rest of this quote has more to say about companies like Google and Facebook, as well as about the mass surveillance of the NSA, than many things written since. Indeed, there is something almost quaint about Mumford writing of “no action” decades before social media made it so that an undocumented action is of questionable veracity. And the comment regarding “no conversation” seems uncomfortably apt in an age where people are cautioned not to disclose private details in front of their smart TVs and in which the Internet of Things populates people’s homes with devices that are always listening.

    Mumford may have written these words in the age of large mainframe computers, but his comments on “the total destruction of autonomy” and the push towards “computer dominated society” demonstrate that he did not believe that the power of such machines could be safely locked away. Indeed, that Mumford saw the computer as an example of an “authoritarian technic” makes it highly questionable that he would have been swayed by the idea that personal computers could grant individuals more autonomy. Rather, as I discuss below, it is far more likely that he would have seen the personal computer as precisely the sort of democratic-seeming gadget used to “bribe” people into accepting the larger “authoritarian” system. For it is precisely through the placing of personal computers in people’s homes, and eventually on their persons, that the megamachine is able to advance towards its goal of total control.

    The earlier incarnations of the megamachine had dreamt of the sort of power that became actually available in the aftermath of World War II thanks to “nuclear energy, electric communication, and the computer” (274). And finally the megamachine’s true goal became clear: “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system” (275). In short, the ultimate purpose of the megamachine was to further the power and enhance the control of the megamachine itself. It is easy to see in this a warning about the dangers of “big data” many decades before that term had entered into common use. Aware of how odd these predictions may have sounded to his contemporaries, Mumford recognized that only a few decades earlier such ideas could have been dismissed as just so much “satire,” but he emphasized that such alarming potentialities were now either already in existence or nearly within reach (275).

    In the twenty-first century, after the digital turn, it is easy to find examples of entities that fit the bill of the megamachine. It may, in fact, be easier to do this today than it was during Mumford’s lifetime. For one no longer needs to engage in speculative thinking to find examples of technologies that ensure that “no action” goes unnoticed. The handful of massive tech conglomerates that dominate the digital world today—companies like Google, Facebook, and Amazon—seem almost scarily apt manifestations of the megamachine. Under these platforms “every manifestation of life” gets “processed into the computer and brought under its all-pervading system of control,” whether it be what a person searches for, what they consider buying, how they interact with friends, how they express their likes, what they actually purchase, and so forth. And as these companies compete for data they work to ensure that nothing is missed by their “relentless eye[s].” Furthermore, though these companies may be technology firms they are like the classic megamachines insofar as they bring together the “political and economic, military, bureaucratic and royal.” Granted, today’s “royal” are not those who have inherited their thrones but those who owe their thrones to the tech empires at the heads of which they sit. And the status of these platforms’ users, reduced as they are to cogs supplying an endless stream of data, further demonstrates the totalizing effects of the megamachine as it coordinates all actions to serve its purposes. And yet, Google, Facebook, and Amazon are not the megamachine, but rather examples of megatechnics; the megamachine is the broader system of which all of those companies are merely parts.

    Though the chilling portrait created by Mumford seems to suggest a definite direction, and a grim final destination, Mumford tried to highlight that such a future “though possible, is not determined, still less an ideal condition of human development” (276). Nevertheless, it is clear that Mumford saw the culmination of “the megamachine” in the rise of the computer and the growth of “computer dominated society.” Thus, “the megamachine” is a forecast of the world after “the digital turn.” Yet, the continuing strength of Mumford’s concept is based not only on the prescience of the idea itself, but in the way in which Mumford sought to explain how it is that the megamachine secures obedience to its strictures. It is to this matter that our attention, at last, turns.

    From the Megatechnic Bribe to Megatechnic Blackmail

    To explain how the megamachine had maintained its power, Mumford provided two answers, both of which avoid treating the megamachine as a merely “autonomous” force (Winner 1989, 108-109). The first explanation that Mumford gives is an explanation of the titular idea itself: “the ultimate religion of our seemingly rational age” which he dubbed “the myth of the machine” (Mumford 1975, 375). The key component of this “myth” is “the notion that this machine was, by its very nature, absolutely irresistible—and yet, provided that one did not oppose it, ultimately beneficial” (Mumford 1967, 224)—once assembled and set into action the megamachine appears inevitable, and those living in megatechnic societies are conditioned from birth to think of the megamachine in such terms (Mumford 1970, 331).

    Yet, the second part of the myth is equally, if not more, important: it is not merely that the megamachine appears “absolutely irresistible” but that many are convinced that it is “ultimately beneficial.” This feeds into what Mumford described as “the megatechnic bribe,” a concept which he first sketched briefly in “Authoritarian and Democratic Technics” (Mumford 1964, 6) but which he fully developed in The Pentagon of Power (Mumford 1970, 330-334). The “bribe” functions by offering those who go along with it a share in the “perquisites, privileges, seductions, and pleasures of the affluent society” so long, that is, as they do not question or ask for anything different from that which is offered (330). And this, Mumford recognizes, is a truly tempting offer, as it allows its recipients to believe they are personally partaking in “progress” (331). After all, a “bribe” only really works if what is offered is actually desirable. But, Mumford warns, once a people opt for the megamachine, once they become acclimated to the air-conditioned pleasure palace of the megatechnic bribe, “no other choices will remain” (332).

    By means of this “bribe,” the megamachine is able to effect an elaborate bait and switch: one through which people are convinced that an authoritarian technic is actually a democratic one. For the bribe accepts “the basic principle of democracy, that every member of society should have a share in its goods” (Mumford 1964, 6). Mumford did not deny the impressive things with which people were being bribed, but to see them as only beneficial required, in his estimation, a one-sided assessment which ignored “long-term human purposes and a meaningful pattern of life” (Mumford 1970, 333). It entailed confusing the interests of the megamachine with the interests of actual people. Thus, the problem was not the gadgets as such, but the system in which these things were created and produced, and the purposes for which they were disseminated: the problem was that the true purpose of these things was to incorporate people into the megamachine (334). The megamachine created a strange and hostile new world, but offered its denizens bribes to convince them that life in this world was actually a treat. Ruminating on the matter of the persuasive power of the bribe, Mumford wondered if democracy could survive after “our authoritarian technics consolidates its powers, with the aid of its new forms of mass control, its panoply of tranquilizers and sedatives and aphrodisiacs” (Mumford 1964, 7). And in typically Jonah-like fashion, Mumford balked at the very question, noting that in such a situation “life itself will not survive, except what is funneled through the mechanical collective” (7).

    If one chooses to take the framework of the “megatechnic bribe” seriously, then it is easy to see it at work in the twenty-first century. It is the bribe that stands astride the dais at every gaudy tech launch; it is the bribe which beams down from billboards touting the slightly sleeker design of the new smartphone; it is the bribe which promises connection or health or beauty or information or love or even technological protection from the forces that technology has unleashed. The bribe is the offer of enticing positives that distracts from the legion of downsides. And in all of these cases that which is offered is that which ultimately enhances the power of the megamachine. As Mumford feared, the values that wind up being transmitted across these “bribes,” though they may attempt a patina of concern for moral or democratic values, are mainly concerned with reifying (and deifying) the values of the system offering up these forms of bribery.

    Yet this reading should not be taken as a curmudgeonly rejection of technology as such. In keeping with Mumford’s stance, one can recognize that the things put on offer after the digital turn provide people with an impressive array of devices and platforms, but such niceties also seem like the pleasant distraction that masks and normalizes rampant surveillance, environmental destruction, labor exploitation, and the continuing concentration of wealth in a few hands. It is not that there is a total lack of awareness about the downsides of the things that are offered as “bribes,” but that the offer is too good to refuse. And especially if one has come to believe that the technological status quo is “absolutely irresistible” then it makes sense why one would want to conclude that this situation is “ultimately beneficial.” As Langdon Winner put it several decades ago, “the prevailing consensus seems to be that people love a life of high consumption, tremble at the thought that it might end, and are displeased about having to clean up the messes that technologies sometimes bring” (Winner 1986, 51). Such a sentiment is the essence of the bribe.

    Nevertheless, more thought needs to be given to the bribe after the digital turn, the point after which the bribe has already become successful. The background of the Cold War may have provided a cultural space for Mumford’s skepticism, but, as Wendy Hui Kyong Chun has argued, with the technological advances around the Internet in the last decade of the twentieth century, “technology became once again the solution to political problems” (Chun 2006, 25). In the twenty-first century, therefore, bribery no longer needs to be deployed to secure loyalty to a system of control towards which there is substantial skepticism. Or, to put it slightly differently, at this point there are not many people who still really need to be convinced that they should use a computer. We no longer need to hypothesize about “computer dominated society,” for we already live there. After all, the technological value systems about which Mumford was concerned have now gained significant footholds not only in the corridors of power, but in every pocket that contains a smartphone. It would be easy to walk through the library brimming with e-books touting the wonders of all that is digital and persuasively disseminating the ideology of the bribe, but such “sugar-coated soma pills”—to borrow a turn of phrase from Howard Segal (1994, 188)—serve more as examples of the continued existence of the bribe than as explanations of how it has changed.

    At the end of her critical history of social media, José Van Dijck (Van Dijck 2013, 174) offers what can be read as an important example of how the bribe has changed, when she notes that “opting out of connective media is hardly an option. The norm is stronger than the law.” On a similar note, Laura Portwood-Stacer in her study of Facebook abstention portrays the very act of not being on that social media platform as “a privilege in itself” —an option that is not available to all (Portwood-Stacer 2012, 14). In interviews with young people, Sherry Turkle has found many “describing how smartphones and social media have infused friendship with the Fear of Missing Out” (Turkle 2015, 145). Though smartphones and social media platforms certainly make up the megamachine’s ecosystem of bribes, what Van Dijck, Portwood-Stacer, and Turkle point to is an important shift in the functioning of the bribe. Namely, that today we have moved from the megatechnic bribe, towards what can be called “megatechnic blackmail.”

    Whereas the megatechnic bribe was concerned with assimilating people into the “new megamachine,” megatechnic blackmail is what occurs once the bribe has already been largely successful. This is not to claim that the bribe does not still function—for it surely does through the mountain of new devices and platforms that are constantly being rolled out—but, rather, that it does not work by itself. The bribe is what is at work when something new is being introduced; it is what convinces people that the benefits outweigh any negative aspects, and it matches the sense of “irresistibility” with a sense of “beneficence.” Blackmail, in this sense, works differently—it is what is at work once people become all too aware of the negative side of smartphones, social media, and the like. Megatechnic blackmail is what occurs once, as Van Dijck put it, “the norm” becomes “stronger than the law”: here it is not the promise of something good that draws someone in but the fear of something bad that keeps people from walking away.

    This puts the real “fear” in the “fear of missing out,” which no longer needs to promise “use this platform because it’s great” but can instead now threaten “you know there are problems with this platform, but use it or you will not know what is going on in the world around you.” The shift from bribe to blackmail can further be seen in the consolidation of control in the hands of fewer companies behind the bribes—the inability of an upstart social network (a fresh bribe) to challenge the dominant social network is largely attributable to the latter having moved into a blackmail position. It is no longer the case that a person, in a Facebook-saturated society, has a lot to gain by joining the site, but that (if they have already accepted its bribe) they have a lot to lose by leaving it. The bribe secures the adoration of the early-adopters, and it convinces the next wave of users to jump on board, but blackmail is what ensures their fealty once the shiny veneer of the initial bribe begins to wear thin.

    Mumford had noted that in a society wherein the bribe was functioning smoothly, “the two unforgivable sins, or rather punishable vices, would be continence and selectivity” (Mumford 1970, 332) and blackmail is what keeps those who would practice “continence and selectivity” in check. As Portwood-Stacer noted, abstention itself may come to be a marker of performative privilege—to opt out becomes a “vice” available only to those who can afford to engage in it. To not have a smartphone, to not have a Facebook account, to not buy things on Amazon, or use Google, becomes either a signifier of one’s privilege or marks one as an outsider.

    Furthermore, choosing to renounce a particular platform (or to use it less) rarely entails swearing off the ecosystem of megatechnics entirely. As far as the megamachine is concerned, insofar as options are available and one can exercise a degree of “selectivity,” what matters is that one is still selecting within that which is offered by the megamachine. The choice between competing systems of particular megatechnics is still a choice that takes place within the framework of the megamachine. Thus, Douglas Rushkoff’s call “program or be programmed” (Rushkoff 2010) appears less as a rallying cry of resistance than as a quiet acquiescence: one can program, or one can be programmed, but what is unacceptable is to try to pursue a life outside of programs. Here the turn that seeks to rediscover the Internet’s once emancipatory promise in wikis, crowd-funding, digital currency, and the like speaks to a subtle hope that the problems of the digital day can be defeated by doubling down on the digital. From this technologically-optimistic view the problem with companies like Google and Facebook is that they have warped the anarchic promise, violated the independence, of cyberspace (Barlow 1996; Turner 2006); or that capitalism has undermined the radical potential of these technologies (Fuchs 2014; Srnicek and Williams 2015). Yet, from Mumford’s perspective such hopes and optimism are unwarranted. Indeed, they are the sort of democratic fantasies that serve to cover up the fact that the computer, at least for Mumford, was ultimately still an authoritarian technology. For the megamachine it does not matter if the smartphone with a Twitter app is used by the President or by an activist: either use is wholly acceptable insofar as both serve to deepen immersion in the “computer dominated society” of the megamachine. And thus, as to the hope that megatechnics can be used to destroy the megamachine, it is worth recalling Mumford’s quip: “Let no one imagine that there is a mechanical cure for this mechanical disease” (Mumford 1954, 50).

    In this situation the only thing worse than falling behind or missing out is to actually challenge the system itself; to practice, or to argue that others should practice, “continence and selectivity” leads to one being denounced as a “technophobe” or “Luddite.” That kind of derision fits well with Mumford’s observation that the attempt to live “detached from the megatechnic complex,” to be “cockily independent of it, or recalcitrant to its demands, is regarded as nothing less than a form of sabotage” (Mumford 1970, 330). Minor criticisms can be permitted if they are of the type that can be assimilated and used to improve the overall functioning of the megamachine, but the unforgivable heresy is to challenge the megamachine itself. It is acceptable to claim that a given company should be attempting to be more mindful of a given social concern, but it is unacceptable to claim that the world would actually be a better place if this company were no more. One sees further signs of the threat of this sort of blackmail at work in the opening pages of the critical books about technology aimed at the popular market, wherein the authors dutifully declare that though they have some criticisms they are not anti-technology. Such moves are not the signs of people merrily cooperating with the bribe, but of people recognizing that they can contribute to a kinder, gentler bribe (to a greater or lesser extent) or risk being banished to the margins as fuddy-duddies, kooks, environmentalist weirdos, or as people who really want everyone to go back to living in caves. The “myth of the machine” thrives on the belief that there is no alternative. One is permitted (in some circumstances) to say “don’t use Facebook” but one cannot say “don’t use the Internet.” Blackmail is what helps to bolster the structure that unfailingly frames the megamachine as “ultimately beneficial.”

    The megatechnic bribe dazzles people by muddling the distinction between, to use a comparison Mumford was fond of, “the goods life” and “the good life.” But megatechnic blackmail warns those who grow skeptical of this patina of “the good life” that they can either settle for “the goods life” or look forward to an invisible life on the margins. Those who can’t be bribed are blackmailed. Thus it is no longer just that the myth of the machine is based on the idea that the megamachine is “absolutely irresistible” and “ultimately beneficial” but that it now includes the idea that to push back is “unforgivably detrimental.”

    Conclusion

    Of the various biblical characters from whom one can draw inspiration, Jonah is something of an odd choice for a public intellectual. After all, Jonah first flees from his prophetic task, sleeps in the midst of a perilous storm, and upon delivering the prophecy retreats to a hillside to glumly wait to see if the prophesied destruction will come. There is a certain degree to which Jonah almost seems disappointed that the people of Nineveh mend their ways and are forgiven by God. Yet some of Jonah’s frustrated disappointment flows from his sense that the whole ordeal was pointless—he had always known that God would forgive the people of Nineveh and not destroy the city. Given that, why did Jonah have to leave the comfort of his home in the first place? (JPS 1999, 1333-1337). Mumford always hoped to be proven wrong. As he put it in the very talk in which he introduced himself as Jonah, “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy” (Mumford 1979, 528). But those words do not appear on Mumford’s tombstone.

    Assessing whether Mumford was “an absolute fool” and whether any “of the disastrous things that he reluctantly predicted ever came to pass” is a tricky mire to traverse. For the way that one responds probably has as much to do with whether or not one shares Mumford’s outlook as with anything particular he wrote. During his lifetime Mumford had no shortage of critics who viewed him as a stodgy pessimist. But what is one to expect if one is trying to follow the example of Jonah? If you see yourself as “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (528), then you can hardly be surprised when many choose to dismiss you as a way of dismissing the bad news you bring.

    Yet it has been the contention of this paper that Mumford should not be ignored—and that his thought provides a good tool to think with after the digital turn. In his introduction to the 2010 edition of Mumford’s Technics and Civilization, Langdon Winner notes that it “openly challenged scholarly conventions of the early twentieth century and set the stage for decades of lively debate about the prospects for our technology-centered ways of living” (Mumford 2010, ix). Even if the concepts from The Myth of the Machine have not “set the stage” for debate in the twenty-first century, the ideas that Mumford develops there can pose useful challenges for present discussions around “our technology-centered ways of living.” True, “the megamachine” is somewhat clunky as a neologism, but as a term that encompasses the technical, political, economic, and social arrangements of a powerful system it provides a better shorthand for capturing the essence of Google or the NSA than many other terms. Mumford clearly saw the rise of the computer as the invention through which the megamachine would be able to fully secure its throne. At the same time, the idea of the “megatechnic bribe” is a thoroughly discomforting explanation for how people can grumble about Apple’s labor policies or Facebook’s uses of user data while eagerly lining up to upgrade to the latest model of iPhone or clicking “like” on a friend’s vacation photos. But in the present day the bribe has matured beyond a purely pleasant offer into a sort of threat that compels consent. Indeed, the idea of the bribe may be among Mumford’s grandest moves in the direction of telling people what they “don’t want to hear.” It is discomforting to think of your smartphone as something being used to “bribe” you, but that discomfort may be a result of the way in which the claim resonates.

    Lewis Mumford never performed a Google search, never made a Facebook account, never Tweeted or owned a smartphone or a tablet, and his home was not a repository for the doodads of the Internet of Things. But it is doubtful that he would have been overly surprised by any of them. Though he may have appreciated them for their technical capabilities he would have likely scoffed at the utopian hopes that are hung upon them. In 1975 Mumford wrote: “Behold the ultimate religion of our seemingly rational age—the Myth of the Machine! Bigger and bigger, more and more, farther and farther, faster and faster became ends in themselves, as expressions of godlike power; and empires, nations, trusts, corporations, institutions, and power-hungry individuals were all directed to the same blank destination” (Mumford 1975, 375).

    Is this assessment really so outdated today? If so, perhaps the stumbling block is merely the term “machine,” which had more purchase in the “our” of Mumford’s age than in our own. Today, that first line would need to be rewritten to read “the Myth of the Digital” —but other than that, little else would need to be changed.

    _____

    Zachary Loeb is a graduate student in the History and Sociology of Science department at the University of Pennsylvania. His research focuses on technological disasters, computer history, and the history of critiques of technology (particularly the work of Lewis Mumford). He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Alvares, Claude. 1988. “Science, Colonialism, and Violence: A Luddite View.” In Science, Hegemony and Violence: A Requiem for Modernity, edited by Ashis Nandy. Delhi: Oxford University Press.
    • Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace” (Feb 8).
    • Blake, Casey Nelson. 1990. Beloved Community: The Cultural Criticism of Randolph Bourne, Van Wyck Brooks, Waldo Frank, and Lewis Mumford. Chapel Hill: The University of North Carolina Press.
    • Bookchin, Murray. 1995. Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm. Oakland: AK Press.
    • Chun, Wendy Hui Kyong. 2006. Control and Freedom. Cambridge: The MIT Press.
    • Cowley, Malcolm and Bernard Smith, eds. 1938. Books That Changed Our Minds. New York: The Kelmscott Editions.
    • Ezrahi, Yaron, Everett Mendelsohn, and Howard P. Segal, eds. 1994. Technology, Pessimism, and Postmodernism. Amherst: University of Massachusetts Press.
    • Ellul, Jacques. 1967. The Technological Society. New York: Vintage Books.
    • Ellul, Jacques. 1980. The Technological System. New York: Continuum.
    • Farrenkopf, John. 2001. Prophet of Decline: Spengler on World History and Politics. Baton Rouge: LSU Press.
    • Fox, Richard Wightman. 1990. “Tragedy, Responsibility, and the American Intellectual, 1925-1950.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Fromm, Erich. 1968. The Revolution of Hope: Toward a Humanized Technology. New York: Harper & Row, Publishers.
    • Fuchs, Christian. 2014. Social Media: A Critical Introduction. Los Angeles: Sage.
    • Herf, Jeffrey. 1984. Reactionary Modernism: Technology, Culture, and Politics in Weimar and the Third Reich. Cambridge: Cambridge University Press.
    • Hughes, Michael, ed. 1971. The Letters of Lewis Mumford and Frederic J. Osborn: A Transatlantic Dialogue, 1938-1970. New York: Praeger Publishers.
    • Hughes, Thomas P. and Agatha C. Hughes. 1990. Lewis Mumford: Public Intellectual. New York: Oxford University Press.
    • Hughes, Thomas P. 2004. Human-Built World: How to Think About Technology and Culture. Chicago: University of Chicago Press.
    • Ihde, Don. 1993. Philosophy of Technology: an Introduction. New York: Paragon House.
    • Jacoby, Russell. 2005. Picture Imperfect: Utopian Thought for an Anti-Utopian Age. New York: Columbia University Press.
    • JPS Hebrew-English Tanakh. 1999. Philadelphia: The Jewish Publication Society.
    • Lasch, Christopher. 1991. The True and Only Heaven: Progress and Its Critics. New York: W. W. Norton and Company.
    • Luccarelli, Mark. 1996. Lewis Mumford and the Ecological Region: The Politics of Planning. New York: The Guilford Press.
    • Marx, Leo. 1988. The Pilot and the Passenger: Essays on Literature, Technology, and Culture in the United States. New York: Oxford University Press.
    • Marx, Leo. 1990. “Lewis Mumford: Prophet of Organicism.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Marx, Leo. 1994. “The Idea of ‘Technology’ and Postmodern Pessimism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Mendelsohn, Everett. 1994. “The Politics of Pessimism: Science and Technology, Circa 1968.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Miller, Donald L. 1989. Lewis Mumford: A Life. New York: Weidenfeld and Nicolson.
    • Mitcham, Carl. 1994. Thinking Through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press.
    • Molesworth, Charles. 1990. “Inner and Outer: The Axiology of Lewis Mumford.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Mumford, Lewis. 1926. “Radicalism Can’t Die.” The Jewish Daily Forward (English section, Jun 20).
    • Mumford, Lewis. 1934. Technics and Civilization. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1944. The Condition of Man. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1951. The Conduct of Life. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1954. In the Name of Sanity. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1959. “An Appraisal of Lewis Mumford’s Technics and Civilization (1934).” Daedalus 88:3 (Summer). 527-536.
    • Mumford, Lewis. 1962. The Story of Utopias. New York: Compass Books, Viking Press.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Mumford, Lewis. 1967. Technics and Human Development. Vol. 1 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1970. The Pentagon of Power. Vol. 2 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1975. Findings and Keepings: Analects for an Autobiography. New York: Harcourt, Brace and Jovanovich.
    • Mumford, Lewis. 1979. My Work and Days: A Personal Chronicle. New York: Harcourt, Brace, Jovanovich.
    • Mumford, Lewis. 1982. Sketches from Life: The Autobiography of Lewis Mumford. New York: The Dial Press.
    • Mumford, Lewis. 2010. Technics and Civilization. Chicago: The University of Chicago Press.
    • Portwood-Stacer, Laura. 2012. “Media Refusal and Conspicuous Non-consumption: The Performative and Political Dimensions of Facebook Abstention.” New Media and Society (Dec 5).
    • Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.
    • Rushkoff, Douglas. 2010. Program or Be Programmed. Berkeley: Soft Skull Books.
    • Segal, Howard P. 1994a. “The Cultural Contradictions of High Tech: or the Many Ironies of Contemporary Technological Optimism.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Segal, Howard P. 1994b. Future Imperfect: The Mixed Blessings of Technology in America. Amherst: University of Massachusetts Press.
    • Spengler, Oswald. 1932a. Form and Actuality. Vol. 1 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 1932b. Perspectives of World-History. Vol. 2 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 2002. Man and Technics: A Contribution to a Philosophy of Life. Honolulu: University Press of the Pacific.
    • Srnicek, Nick and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. New York: Verso Books.
    • Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press.
    • Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, The Whole Earth Network and the Rise of Digital Utopianism. Chicago: The University of Chicago Press.
    • Van Dijck, José. 2013. The Culture of Connectivity. Oxford: Oxford University Press.
    • Watson, David. 1997. Against the Megamachine: Essays on Empire and Its Enemies. Brooklyn: Autonomedia.
    • Williams, Rosalind. 1990. “Lewis Mumford as a Historian of Technology in Technics and Civilization.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Williams, Rosalind. 1994. “The Political and Feminist Dimensions of Technological Determinism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Winner, Langdon. 1986. The Whale and the Reactor. Chicago: University of Chicago Press.
    • Winner, Langdon. 1989. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press.
    • Wojtowicz, Robert. 1996. Lewis Mumford and American Modernism: Eutopian Themes for Architecture and Urban Planning. Cambridge: Cambridge University Press.

     

  • Chris Gilliard and Hugh Culik — The New Pythagoreans

    Chris Gilliard and Hugh Culik

    A student’s initiation into mathematics routinely includes an encounter with the Pythagorean Theorem, a simple statement that describes the relationship between the hypotenuse and sides of a right triangle: the sum of the squares of the sides is equal to the square of the hypotenuse, i.e., A² + B² = C². The statement and its companion figure of a generic right triangle are offered as an interchangeable, seamless flow between geometric “things” and numbers (Kline 1980, 11). Among all the available theorems that might be offered as emblematic of mathematics, this one is held out as illustrative of a larger claim about mathematics and the Real. This use suggests that it is what W. J. T. Mitchell would call a “hypericon,” a visual paradigm that doesn’t “merely serve as [an] illustration to theory; [it] picture[s] theory” (1995, 49). Understood in this sense, the Pythagorean Theorem asserts a central belief of Western culture: that mathematics is the voice of an extra-human realm, a realm of fundamental, unchanging truth apart from human experience, culture, or biology. It is understood as more essential than the world and as prior to it. Mathematics becomes an outlier among representational systems because numbers are claimed to be “ideal forms necessarily prior to the material ‘instances’ and ‘examples’ that are supposed to illustrate them and provide their content” (Rotman 2000, 147).[1] The dynamic flow between the figure of the right triangle and the formula transforms mathematical language into something akin to Christian concepts of a prelapsarian language, a “nomenclature of essences, in which word would have reflected thing with perfect accuracy” (Eagle 2007, 184). As the Pythagoreans styled it, the world is number (Guthrie 1962, 256). The image schools the child into the culture’s uncritical faith in the rhetoric of numbers, a sort of everyman’s version of the Pythagorean vision. Whatever the general belief in this notion, the nature of mathematical representations has been a central problematic of mathematics that appears throughout its history. The difference between the historical significance of this problematic and its current manifestation in the rhetoric of “Big Data” illustrates an important cultural anxiety.
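
    To see the “seamless flow” the theorem is asked to picture, it may help to set its arithmetic out in the simplest whole-number case, the 3-4-5 right triangle (the notation below is ours, offered only as an illustration of the formula cited above):

    \[
    A^{2} + B^{2} = C^{2}, \qquad \text{for example} \qquad 3^{2} + 4^{2} = 9 + 16 = 25 = 5^{2}.
    \]

    The flow runs in both directions: measure the sides of the figure and the numbers follow; accept the numbers and the figure seems guaranteed. It is precisely this two-way traffic that the hypericon naturalizes.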

    Contemporary culture uses the Pythagorean Theorem’s image and formula as a hypericon that not only obscures problematic assumptions about the consistency and completeness of mathematics, but which also misrepresents the consistency and completeness of the material-world relationships that mathematics is used to describe.[2] This rhetoric of certainty, consistency, and completeness continues to infect contemporary political and ideological claims. For example, “Big Data” enthusiasts – venture capitalists, politicians, financiers, education reformers, policing strategists, et al. – often invoke a neo-Pythagorean worldview to validate their claims, claims that rest on the interplay of technology, analysis, and mythology (Boyd and Crawford 2012, 663). What is a highly productive problematic in the 2,500-year history of mathematics disappears into naïve assertions about the inherent “truth” of the algorithmic outputs of mathematically based technologies. When corporate behemoths like Pearson and Knewton (makers of an adaptive learning platform) participate in events such as the Department of Education’s 2012 “Datapalooza,” the claims become totalizing. Knewton’s CEO, Jose Ferreira, asserts, in a crescendo of claims, that “Knewton gets 5-10 million actionable data points per student per day” and that tagging content “unlocks data.” In his terms, “work cascades out data” that is then subject to the various models the corporation uses to predict and prescribe the future. His claims of descriptive completeness are correct, he asserts, because “everything in education is correlated to everything else” (November 2012). The narrative of Ferreira’s claims is couched in fluid equivalences of data points, mathematical models, and a knowable future. Data become a metonym not only for the real student, but for the nature of learning and human cognition. In a sort of secularized predestination, the future’s origin in perfectly representational numbers produces perfect predictions of students’ performance. Whatever the scale of the investment dollars behind these New Pythagoreans, such claims lose their patina of objective certainty when placed in the history of the West’s struggle with mathematized claims about a putative “real.” For them, predictions are not the outcomes of processes; rather, predictions are revelations of a deterministic reality.[3]

    A recent claim for a facial-recognition algorithm that identifies criminals normalizes its claims by simultaneously asserting and denying that “in all cultures and all periods of recorded human history, [is] the belief that the face alone suffices to reveal innate traits of a person” (Wu and Zhang 2016, 1). The authors invoke the Greeks:

    Aristotle in his famous work Prior Analytics asserted, ‘It is possible to infer character from features, if it is granted that the body and the soul are changed together by the natural affections’ (1)

    The authors then remind readers that “the same question has captivated professionals (e.g., psychologists, sociologists, criminologists) and amateurs alike, across all cultures, and for as long as there are notions of law and crime. Intuitive speculations are abundant both in writing . . . and folklore.” Their work seeks to demonstrate that the question yields to a mathematical model, a model that is specifically a non-human intelligence: “In this section, we try to answer the question in the most mechanical and scientific way allowed by the available tools and data. The approach is to let a machine learning method explore the data and reveal the most discriminating facial features that tell apart criminals and non-criminals” (6). The rhetoric solves the problem by asserting an unchanging phenomenon – the criminal face – and by invoking a mathematics that operates via machine learning. Problematic crimes such as “DWB” (driving while black) disappear along with history and social context.

    Such claims rest on confused and contradictory notions. For the Pythagoreans, mathematics was not a representational system. It was the real, a reality prior to human experience. This claim underlies the authority of mathematics in the West. But simultaneously, mathematics effectively operates as a response to the world, i.e., it is a re-presentation. As re-presentational, it becomes another language, and like other languages, it is founded on bias, exclusions, and incompleteness. These two notions of mathematics are resolved by seeing the representation as more “real” than the multiply determined events it re-presents. Nonetheless, once we say it re-presents the real, it becomes just another sign system that comes after the real. Often, bouncing back and forth between its extra-human status and its representational function obscures the places where representation fails or becomes an approximation. To data fetishists, “data” has a status analogous to that of “number” in the Pythagoreans’ world. For them, reality is embedded in a quasi-mathematical system of counting, measuring, and tagging. But the ideological underpinnings, pedagogical assumptions, and political purposes of the tagging go unremarked; to remark on them would problematize the representational claims. Because the world is number, coders are removed from the burden of history and from the responsibility to examine the social context that both creates and uses their work.

    The confluence of corporate and political forces validates itself through mathematical imagery, animated graphics, and the like. Terms such as “data-driven” and “evidence-based” grant the rhetoric of numbers a power that ignores its problematic assumptions. There is a pervasive refusal to recognize that data are artifacts of the descriptive categories imposed on the world. But “Big Data” goes further; the term is used in ways that perpetuate the antique notion of “number” by invoking numbers as distillations of certainty and a knowable universe. “Number” becomes decontextualized and stripped of its historical, social, and psychological origins. Because the claims of Big Data embed residual notions about the re-presentational power of numbers, and about mathematical completeness and consistency, they speak to deeply embedded beliefs about mathematics, the most fundamental of which is the Pythagorean claim that the world is number. The point is not to argue whether mathematics is formal, referential, or psychological; rather, it is to place contemporary claims about “Big Data” in historical and cultural contexts where such issues are problematized. The claims of Big Data speak through a language whose power rests on longstanding notions of mathematics; however, these notions lose some of their power when placed in the context of mathematical invention (Rotman 2000, 4-7).

    “Big Data” represents a point of convergence for residual mathematical beliefs, beliefs that obscure cultural frameworks and thus interfere with critique. For example, predictive policing tools are claimed to produce neutral, descriptive acts using machine intelligence. Berk asserts that “if you let the computer just snoop around in the dataset, it finds things that are unexpected by existing theory and works really substantially well to help forecast” (Berk 2011). In this view, Big Data – the numerical real – can be queried to produce knowledge that is not driven by any theoretical or ideological interest. Precisely because the world is presumed to be mathematical, the political, economic, and cultural frameworks of its operation can become the responsibility of the algorithm’s users. On this version of a mathematized real, algorithmic action has no inherent ethical character prior to the use of its output. Thus, the operation of the algorithm is doubly separated from its social contexts. First, the mathematics itself is conceived as an autonomous embodiment of a reality independent of the human; second, the effects of the algorithm – its predictions – are held apart from the values, beliefs, and needs that created the algorithm. The specific limits of historical and social context do not mathematically matter; the limits are determined by the values and beliefs of the algorithm’s users. The problematics of mathematizing the world are passed off to its customers. Boyd and Crawford identify three interacting phenomena that create the notion of Big Data: technology, analysis, and mythology (2012, 663). The mythological element embodies both dystopian and utopian narratives, and thus shapes how we categorize reality. O’Neil notes that “these models are constructed not just from data but from the choices we make about which data to pay attention to – and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral” (2016, 218). On one hand, the predictive value depends on the moral, ethical, and political values of the user, a non-mathematical question. On the other hand, this division between the model and its application carves out a special arena where the New Pythagoreans claim to operate without having to recognize social or historical contexts.

    Whatever their commitment to number, the Pythagoreans were keenly aware that their system was vulnerable to discoveries that problematized their basic claim that the world is number. And they protected their beliefs through secrecy and occasionally through violence. Like the proprietary algorithms of contemporary corporations, their work was reserved for a circle of adepts/owners. First among their secrets was the keen understanding that an unnamable point on the number line would represent a rupture in the relationship of mathematics and world. If that relationship failed, with it would go their basis for belief in a knowable world. Their claims arose from within the concrete practices of Greek mathematics. For example, the Greeks portrayed numbers by a series of dots called Monads. The complex ratios used to describe geometric figures were understood to generate the world, and numbers were visualized in arrangements of stones (calculi). A 2 x 2 arrangement of stones had the form of a square, hence the term “square numbers.” Thus, it was a foundational claim that any point or quantity (because monads were conceived as material objects) must have a corresponding number. Line segments, circumferences, and all the rest had to correspond to what we still call the “rational numbers”: 1, 2, 3 . . . and their ratios. Thus, the Pythagoreans’ great claim – that the world is number – was vulnerable to the discovery of a point on the number line that could not be named as the ratio of integers.

    Unfortunately for their claim, such numbers are common, and the great irony of the Pythagorean Theorem lies in the fact that it routinely generates numbers that are not ratios of integers. For example, a right triangle with sides one-unit long has a hypotenuse √2 units long (1² + 1² = C², i.e., 2 = C², i.e., C = √2). Numbers such as √2 contradict the mathematical aspiration toward a completely representational system because they cannot be expressed as a ratio of integers, and hence their status as what are called “ir-rational” numbers.[4] A relatively simple proof demonstrates that no ratio of integers can name √2, since the assumption that one does forces the same number to be both odd and even; these numbers exist in what is called a “surd” relationship to the integers, that is, they are silent – the meaning of “surd” – about each other. They literally cannot “speak” to each other. To the Pythagoreans, this appeared as a discontinuity in their naming system, a gap that might be the mark of a world beyond the generative power of number. Such numbers are, in fact, a new order of naming precipitated by the limited representational power of the prior naming system based on the rational numbers. But for the Pythagoreans, to look upon these numbers was to look upon the void, to discover that the world had no intrinsic order. Irrational numbers disrupted the Pythagorean project of mathematizing reality. This deeply religious impulse toward order underlies the aspiration that motivates the bizarre and desperate terminologies of contemporary data fetishists: “data-driven,” “evidence-based,” and even “Big Data,” which is usually capitalized to show the reification of number it desires.
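
    For readers who want the “relatively simple proof” spelled out, here is a minimal sketch of the classical argument by contradiction, set in LaTeX notation (the presentation is ours; the Greeks reasoned with odd and even monads rather than symbols):

    \[
    \sqrt{2} = \frac{p}{q} \;\; \text{with} \;\; \gcd(p, q) = 1
    \;\Longrightarrow\; p^{2} = 2q^{2}
    \;\Longrightarrow\; p = 2k
    \;\Longrightarrow\; q^{2} = 2k^{2}.
    \]

    The first implication makes p² even, so p itself must be even (p = 2k); substituting back makes q² even, so q must be even as well. But p and q cannot both be even if their ratio was taken in lowest terms, so no such ratio exists: √2 is a point on the number line that the Pythagorean naming system cannot name.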

    Big Data appeals to a mathematical nostalgia for certainty that cannot be sustained in contemporary culture. O’Neil provides careful examples of how history, social context, and the data chosen for algorithmic manipulation do not – indeed cannot – matter in this neo-Pythagorean world. Like Latour, she historicizes the practices and objects that the culture pretends are natural. The ideological and political nature of the input becomes invisible, especially when algorithms are granted special proprietary status that converts them to what Pasquale calls a “black box” (2016). It is a problematic claim, but it can be made without consequence because it speaks in the language of an ancient mathematical philosophy still heard in our culture,[5] especially in education where the multifoliate realities of art, music, and critical writing are quashed by forces such as the Core Curriculum and its pervasive valorization of standardization. Such strategies operate in fear of the inconsistency and incompleteness of any representational relationship, a fear of epistemological silence that has lurked in the background of Western mathematics from its beginnings. To the Greeks, the irrationals represented a sort of mathematical aphasia. The irrational numbers such as √2 thus obtained emblematic values far beyond their mathematical ones. They inserted an irremediable gap between the world and the “word” of mathematics. Such knowledge was catastrophic – adepts were murdered for revealing the incommensurability of side and diagonal.[6] More importantly, the discovery deeply fractured mathematics itself. The gap in the naming system split mathematics into algebra (numerical) and geometry (spatial), a division that persisted for almost 2,000 years. Little wonder that the Greeks restricted geometry to measurements that were not numerical, but rather were produced through the use of a straightedge and compass. Physical measurement by line segments and circles rather than by a numerical length effectively sidestepped the threat posed by the irrational numbers. Kline notes, “The conversion of all of mathematics except the theory of whole numbers into geometry . . . forced a sharp separation between number and geometry . . . at least until 1600” (1980, 105). Once we recognize that the Pythagorean theorem is a hypericon, i.e., a visual paradigm that pictures theory, we begin to see its extension into other fundamental mathematical “discoveries” such as Descartes’s creation of coordinate geometry. A deep anxiety about the gap between word and world is manifested in mathematics as well as in contemporary claims about “Big Data.”

    The division between numerical algebra and spatial geometry remained a durable feature of Western mathematics until problematized by social change. Geometry offered an elegant axiomatic system that satisfied the hierarchical impulse of the culture, and it worked in concert with the Aristotelian logic that dominated notions of truth. The Aristotelian nous and the Euclidian axioms seemed similar in ways that justified the hierarchical structure of the church and of traditional politics. They were part of a social fabric that bespoke an extra-human order that could be dis-covered. But with the rise of commercial culture came the need for careful records, computations, risk assessments, interest calculations, and other algebraic operations. The tension between algebra and geometry became more acute and visible. It was in this new cultural setting that Descartes’s work appeared. Descartes’s 1637 publication of La Géométrie confronted the terrors revealed in the irrationals embodied in the geometry/algebra divide by subordinating both algebra and geometry to a more abstract relationship. Turchin notes that Descartes re-unified geometry and arithmetic not by granting either priority or reducing either to the other; rather, in his language “the symbols do not designate number or quantities, but relations of quantities” (Turchin 1977, 196).

    Rotman directly links concepts of number to this shifting relationship of algebra and geometry and even to the status of numbers such as zero:

    During the fourteenth century, with the emergence of mercantile capitalism in Northern Italy, the handling of numbers passed . . . to merchants, artisan-scientists, architects . . . for whom arithmetic was an essential prerequisite for trade and technology . . . . The central role occupied by double-entry book-keeping (principle of the zero balance) and the calculational demands of capitalism broke down any remaining resistance to the ‘infidel symbol’ of zero. (1987, 7-8)

    The emergence of the zero is an index to these changes, not the revelation of a pre-existing, extra-human reality. Similarly, Alexander’s history of the calculus places its development in the context of Protestant notions of authority (2014, 140-57). He emphasizes that the methodologies of the sciences and mathematics began to serve as political models for scientific societies: “if reasonable men of different backgrounds and convictions could meet to discuss the workings of nature, why could they not do the same in matters that concerned the state?” (2014, 249). Again, in the case of the calculus, mathematics responds to the emerging forces of the Renaissance: individualism, capitalism, and Protestantism. Certainly, the ongoing struggle with irrational numbers extends from the Greeks to the Renaissance, but the contexts are different. For the Greeks, the generative nature of number was central. For 17th Century Europe, the material demands of commercial life converged with religious, economic, and political shifts to make number a re-presentational tool.

    The turmoil of that historical moment suggests the turmoil of our own era in the face of global warfare, climate change, over-population, and the litany of other catastrophes we perpetually await.[7] In both cases, the anxiety produces impulses to mathematize the world and thereby reveal a knowable “real.” The current corporate fantasy that the world is a simulation is the fantasy of non-mathematicians such as Elon Musk and Sam Altman, who embed themselves in a techno-centric narrative in which their own tools have the power to create them. While this cut-rate version of Baudrillard’s work might seem sophomoric, it nevertheless exposes the impulse to contain the visceral fear that a socially constructed world is no different from solipsism’s chaos. It seems a version of the freshman student’s claim that “Everything’s just opinion” or the plot of another Matrix film. They speak/act/claim that their construction of meaning is equal to any other — the old claim that Hitler and Mother Teresa are but two equally valid “opinions.” They do not know the term/concept “social construction,” and their radical notions of the individual prevent them from recognizing the vast scope, depth, and stabilizing power of social structures. They are only the most recent example of how social change exacerbates the misuse of mathematics.

    Amid these sorts of epistemic shifts, Renaissance mathematics underwent its own transformations. Within a fifty-year span (1596-1646), Descartes, Newton, and Leibniz are born. Their major works appear, respectively, in 1637, 1666, and 1675, a burst of innovation that cannot be separated from the shifts in education, economics, religion, and politics that were then sweeping Europe. Porter notes that statistics emerges alongside the rising modern state of this era. Managing the state’s wealth required profiles of populations. Such mathematical profiling began in the mid-1600s, with the intent to describe the state’s wealth and human resources for the creation of “sound, well-informed state policy” (Porter 1986, 18). The notion of probabilities, samples, and models avoids the aspirations that shaped earlier mathematics by making mathematics purely descriptive. Hacking suggests that the delayed appearance of probability arises from five issues: 1) an obsession with determinism and personal fatalism; 2) the belief that God spoke through randomization and thus a theory of the random was impious; 3) the lack of equiprobable events provided by standardized objects, e.g., dice; 4) the lack of economic drivers such as insurances and annuities; and 5) the lack of a workable calculus needed for the computation of probability distributions (Davis and Hersh 1981, 21). Hacking finds these insufficient and suggests instead that as authority was relocated in nature rather than in the words of authorities, the observation of frequencies came to ground probabilistic thinking.[8] Alongside the fierce opposition of the Church to the zero, understood as the absence of God, and to the calculus, understood as an abandonment of material number, the shifting mathematical landscape signals the changes that began to affect the longstanding status of number as a sort of prelapsarian language.

    Mathematics was losing its claims to completeness and consistency; the incommensurables had problematized those claims from the start. Newton and Leibniz “de-problematized” the irrationals and opened mathematics to a new notion of approximation. The central claims about mathematics were not disproved; worse, they were set aside as unproductive conflations of differences between the continuous and the discrete. But because the church saw mathematics as “true” in a fashion inextricable from other notions of the truth, mathematics held a special status. Calculus became a dangerous interest likely to call the Inquisition to action. Alexander locates the central issue as the irremediable conflict between the continuous and the discrete, something that had been the core of Zeno’s paradoxes (2014). The line of mathematical anxieties stretches from the Greeks into the 17th Century. These foundational understandings seem remote and abstract until we see how they re-appear in the current claims about the importance of “Big Data.” The term legitimates its claims by resonating with other responses to the anxiety of representation.

    The nature of the hypericon perpetuates the notion of a stable, knowable reality that rests upon a non-human order. In this view, mathematics is independent of the world. It existed prior to the world and does not depend on the world; it is not an emergent narrative. The mathematician discovers what is already there. While this viewpoint sees mathematics as useful, mathematics is prior to any of its applications and independent of them. The parallel to religious belief becomes obvious if we substitute the term “God” for “mathematics”; the notions of a self-existing, self-knowing, and self-justifying system are equally applicable (Davis and Hersh 1981, 232-3). Mathematics and religion share in a fundamental Western belief in the Ideal. Taken together, they reveal a tension between the material and the eternal that can be mediated by specific languages. There is no doubt that a simplified mathematics serves us when we are faced with practical problems such as staking out a rectangular foundation for a house, but beyond such short-term uses lie more consequential issues, e.g., the relation between the continuous and the discrete, and between notions of the Ideal and the socially constructed. These larger paradoxes remain hidden when assertions of completeness, consistency, and certainty go unchallenged. In one sense, the data fetishists are simply the latest incarnation of a persistent problem: the refusal to understand mathematics as culturally situated.

    Again, historicizing this problem addresses the widespread willingness to accept such totalistic claims. And historicizing these claims requires a turn to established critical techniques. For example, Rotman’s history of the zero turns to Derrida’s Of Grammatology to understand the forces that complicated and paralyzed the acceptance of zero into Western mathematics (1987). He turns to semiotics and to the work of Ricoeur to frame his reading of the emergence of the zero in the West during the Renaissance. Rotman, Alexander, Desrosières, and a host of mathematical historians recognize that the nature of mathematical authority has evolved. The evolution lurks in the role of the irrational numbers, in the partial claims of statistics, and in the approximations of the calculus. The various responses are important as evidence of an anxiety about the limits of representation. The desire to resolve such arguments seems revelatory. All share an interest in the gap between the aspirations of systematic language and its object: the unnamable. That gap is iconic, an emblem of the limits of systematic language and of the functions the gap performs in generating novel responses to the threat of an inarticulable void; its history exposes the powerful attraction of the claims made for Big Data.

    By the late 1800s, questions of systematic completeness and consistency grew urgent. For example, they appeared in the competing positions of Frege and Hilbert, and they resonated in the direction David Hilbert gave to 20th Century mathematics with his famed 23 problems (Blanchette 2014). The second of these specifically addressed the problem of proving the consistency of the axioms of arithmetic. This question deeply influenced Bertrand Russell, Ludwig Wittgenstein, and others.[9] Hilbert’s question was answered in 1931 by Gödel’s theorems, which demonstrated that arithmetic systems are inherently incomplete and unable to establish their own consistency. Gödel’s first theorem demonstrated that axiomatic systems would necessarily have true statements that could be neither proven nor disproven; his second theorem demonstrated that no such system, if consistent, could prove its own consistency. While mathematicians often take care to note that his work addresses a purely mathematical problem, it nevertheless is read metaphorically. As a metaphor, it connects the problematic relationship of natural and mathematical languages. This seems inevitable because it led to the collapse of the mathematical aspiration for a wholly formal language that does not require what is termed ‘natural’ language, that is, for a system that does not have to reach outside of itself. Just as John Craig’s work exemplifies the epistemological anxieties of the late seventeenth century,[10] so also does Gödel’s work identify a sustained attempt of his own era to demonstrate that systematic languages might be without gaps.

    Gödel’s theorems rely on a system that creates specialized numbers for symbols and the operations that relate them. This second-order numbering enabled him to move back and forth between the logic of statements and the codes by which they were represented. His theorems respond to an enduring general hope for complete and consistent mappings of the world with words, and each episode in that history embeds a representational failure. Craig was interested in the loss of belief in the gospels; Pythagoras feared the gaps in the number line represented by the irrational numbers; and Gödel identified the limits of completeness and consistency in axiomatic systems. To the dominant mathematics of the early 20th Century, the value of the question to which Gödel addresses himself lies in the belief that an internally complete mathematical map would be the mark of either of two positions: 1) the purely syntactic orderliness of mathematics, one that need not refer to any experiential world (this is the position of Frege, Russell, and Hilbert); or 2) the emergence of mathematics alongside concrete, human experience. Goldstein argues that these two dominant alternatives of the late nineteenth and early twentieth centuries did not consider the aprioricity of mathematics to constitute an important question, but Gödel offered his theorems as proofs that served exactly that idea. His demonstration of incompleteness does not signal a disorderly cosmos; rather, it argues that there are arithmetic truths that lie outside of formalized systems; as Goldstein notes, “the criteria for semantic truth could be separated from the criteria for provability” (2006, 51). This was an argument for mathematical Platonism. Goldstein’s careful discussion of the cultural framework and the meta-mathematical significance of Gödel’s work emphasizes that it did not argue for the absence of any extrinsic order to the world (51). Rather, Gödel was consciously demonstrating the defects in a mathematical project begun by Frege, addressed in the work of Russell and Whitehead, and enshrined by Hilbert as essential for converting mathematics into a profoundly isolated system whose orderliness lay in its internal consistency and completeness.[11] Similarly, his work also directly addressed questions about the a priori nature of mathematics challenged by the Vienna Circle. Paradoxically, his demonstration that a foundational system – arithmetic – could not be both complete and provably consistent opened the supposedly closed, self-referential system of mathematics to challenge and to meta-mathematical claims about epistemological problems.
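
    The mechanics of that “second-order numbering” can be suggested with a toy example. The sketch below is not Gödel’s 1931 coding but a simplified analogue of our own devising: each symbol of a spare arithmetic alphabet gets a number, and a formula becomes a single integer by using those numbers as exponents of successive primes. Unique factorization makes the encoding reversible, so statements about formulas can be re-read as statements about numbers.

    ```python
    # Toy Gödel numbering (illustrative only; the symbol table and the
    # encoding scheme here are simplified stand-ins, not Gödel's own).

    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

    def godel_number(formula):
        """Encode a string of symbols as one integer via prime exponents."""
        n = 1
        for p, ch in zip(PRIMES, formula):
            n *= p ** SYMBOLS[ch]
        return n

    def decode(n):
        """Recover the formula by reading off the exponent of each prime."""
        inverse = {v: k for k, v in SYMBOLS.items()}
        out = []
        for p in PRIMES:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if e == 0:
                break
            out.append(inverse[e])
        return "".join(out)

    g = godel_number("S0=0+S0")        # the formula "1 = 0 + 1" as one integer
    print(g, decode(g) == "S0=0+S0")   # the coding is reversible
    ```

    The movement back and forth between formula and number is exactly what lets a system of arithmetic make claims, however indirectly, about its own statements.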

    Gödel’s work, among other things, argues for essential differences between human thought and mathematics, and it has become imbricated in a variety of discourses about representation, the nature of the mind, and the nature of language. Goldstein notes:

    The structure of Gödel’s proof, the use it makes of ancient paradox [the liar’s paradox], speaks at some level, if only metaphorically, to the paradoxes in the tale that the twentieth century told itself about some of its greatest intellectual achievements – including, of course, Gödel’s incompleteness theorems. Perhaps someday a historian of ideas will explain the subjectivist turn taken by so many of the last century’s most influential thinkers, including not only philosophers but hard-core scientists, such as Heisenberg and Bohr. (2006, 51)

    At the least, his work participated in a major consideration of three alternative understandings of symbolic systems: as isolated, internally ordered syntactic systems, as accompaniments of experience in the material world, or as the a priori realities of the Ideal. Whatever the immensely complex issues of these various positions, Gödel is the key meta-mathematician/logician whose work describes the limits of mathematical representation through an elegant demonstration that arithmetic systems – axiomatic systems – were inevitably incomplete and unable to certify their own consistency. Depending on one’s aspirations for language, this is either a great catastrophe or an opening to an infinite world of possibility where the goal is to deploy a paradoxical stance that combines the assertion of meaning with its cancellation. This double position addresses the problem of representational completeness.

    This anxiety became acute during the first half of the twentieth century as various discourses deployed strategies that exploited this heightened awareness of the intrinsic incompleteness and inconsistency of systematic knowledge. Whatever their disciplinary differences – neurology, psychology, mathematics – they nonetheless shared the sense that recognizing these limits was an opportunity to understand discourse both from within narrow disciplinary practices and from without in a larger logical and philosophical framework that made the aspiration toward completeness quaint, naïve, and unproductive. They situated the mind as a sort of boundary phenomenon between the deployment of discourses and an extra-linguistic reality. In contrast to the totalistic claims of corporate spokesmen and various predictive software, this sensibility was a recognition that language might always fail to re-present its objects, but that those objects were nonetheless real and expressible as a function of the naming process viewed from yet another position. An important corollary was that these gaps were not only a token for the interplay of word and world, but were also an opportunity to illuminate the gap itself. In short, symbol systems seemed to stand as a different order of phenomena than whatever they proposed to represent, and the result was a burst of innovative work across a variety of disciplines.

    Data enthusiasts sometimes participate in a discredited mathematics, but they do so in powerfully nostalgic ways that resonate with the amorphous Idealism infused in our hierarchical churches, political structures, aesthetics, and epistemologies. Thus, Big Data enthusiasts speak through the residue of a powerful historical framework to assert their own credibility. For these New Pythagoreans, mathematics remains a quasi-religious undertaking whose complexity, consistency, sign systems, and completeness assert a stable, non-human order that keeps chaos at bay. However, they are stepping into an issue more fraught than simply the misuses and misunderstandings of the Pythagorean Theorem. The historicized view of mathematics and the popular invocation of mathematics diverge at the point that anxieties about the representational failure of languages become visible. We not only need to historicize our understanding of mathematics, but also to identify how popular and commercial versions of mathematics are nostalgic fetishes for certainty, completeness, and consistency. Thus, the authority of algorithms has less to do with their predictive power than with their connection to a tradition rooted in the religious frameworks of Pythagoreanism. Critical methods familiar to the humanities – semiotics, deconstruction, psychology – build a sort of critical braid that not only re-frames mathematical inquiry, but places larger questions about the limits of human knowledge directly before us; this braid forces an epistemological modesty that is eventually ethical and anti-authoritarian in ways that the New Pythagoreans rarely are.

    Immodest claims are the hallmark of digital fetishism, and are often unabashedly conscious. Chris Anderson, while Editor-in-Chief of Wired magazine, infamously argued that “the data deluge makes the scientific method obsolete” (2008). He claimed that distributed computing, cloud storage, and huge sets of data made traditional science outmoded. He asserted that science would become mathematics, a mathematical sorting of data to discover new relationships:

    At the petabyte scale, information is not a matter of simple three and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later.

    “Agnostic statistics” would be the mechanism for precipitating new findings. He suggests that mathematics is somehow detached from its contexts and represents the real through its uncontaminated formal structures. In Anderson’s essay, the world is number. This neo-Pythagorean claim quickly gained attention, and then wilted in the face of scholarly responses such as that of Pigliucci (2009, 534).

    Anderson’s claim was both a symptom and a reinforcement of traditional notions of mathematics that extend far back into Western history. Its explicit notions of mathematics stirred two kinds of anxiety: one reflected a fear of a collapsed social project (science) and the other reflected a desperate hunger for a language – mathematics – that penetrated the veil drawn across reality and made the world knowable. Whatever the collapse of his claim, similar ones such as those of the facial phrenologists continue to appear. Without history – mathematical, political, ideological – “data” acquires a material status much as number did for the Greeks, and this status enables statements of equality between the messiness of reality and the neatness of formal systems. Part of this confusion is a common misunderstanding of the equals sign in popular culture. The “sign” is a relational function, much as the semiotician’s signified and signifier combine to form a “sign.” However, when we mistakenly treat the “equals sign” as a directional, productive operation, the nature of mathematics loses its availability to critique. It becomes a process outside of time that generates answers by re-presenting the real in a language. Where once a skeptical Pythagorean might be drowned for revealing the incommensurability of side and diagonal, proprietary secrecy now threatens a sort of legalized financial death for those who violate copyright (Pasquale 2016, 142). Pasquale identifies the “creation of invisible powers” as a hallmark of contemporary, algorithmic culture (2016, 193). His invaluable work recovers the fact that algorithms operate in a network of economic, political, and ideological frameworks, and he carefully argues the role of legal processes in resisting the control that algorithms can impose on citizens.
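
    The distinction between the relational and the directional reading of the sign is easy to see in code, where the two are even written differently. A minimal sketch (ours, not Pasquale’s or Rotman’s):

    ```python
    # In mathematics, "=" asserts a symmetric relation between two
    # expressions; in most programming languages, "=" is a directional
    # operation that computes a value and stores it.

    x = 9                # directional: evaluate the right side, bind it to x
    print(x == 3 * 3)    # relational: True, a claim that can be checked
    print(3 * 3 == x)    # the relation is symmetric; direction is irrelevant

    # Read as mathematics, the next line is nonsense (nothing equals
    # itself plus one); read as a directional operation, it is routine.
    x = x + 1
    ```

    Treating the relational sign as if it were the directional one is precisely what lets an algorithm’s output pass for a statement of identity with the real.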

    Pasquale’s language is not mathematical, but it shares with scholars like Rotman and Goldstein an emphasis on historical and cultural context. The algorithm is made accountable if we think of it as an act whose performance instantiates digital identities through powerful economic, political, and ideological narratives. The digitized individual does not exist until it becomes the subject of such a performance, a performance which is framed much as any other performance is framed: by the social context, by repetition, and through embodiment. Digital individuals come into being when the algorithmic act is performed, but they are digital performances because of the irremediable gap between any object and its re-presentation. In short, they are socially constructed. This would be of little import except that these digital identities begin as proxies for real bodies, but the diagnoses and treatments are imposed on real, social, psychological, flesh beings. The difference between digital identity and human identity can be ignored only if the mathematized self is isomorphic with the human self. Thus, algorithmic acts entangle the input > algorithm > output sequence by concealing layers of problematic differences: digital self and human self; mathematics and the Real; test inputs and test outputs; scaling; and input and output. The sequence loses its tidy sequential structure when we recognize that the outputs are themselves data and often re-enter the algorithm’s computations by their transfer to third parties whose information returns for re-processing. A somewhat better version of the flow would be data1 > algorithm > output > data2 > algorithm > output > data3 . . . , with the understanding that any datum might re-enter the process. The sequence suggests how an object is both the subject of its context and a contributor to that context. The threat of a constricting output looms precisely because there is decreasing room for what de Certeau calls “la perruque” (1988, 25), i.e., the inefficiencies where unplanned innovation appears. And like any text, the algorithmic process requires a variety of analytic strategies.
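
    The recursive flow is easy to simulate. Below is a minimal sketch (invented numbers, a deliberately trivial model of our own devising) of how outputs re-enter the data and narrow it toward the model’s own prior judgments:

    ```python
    # data1 > algorithm > output > data2 > algorithm > output > data3 . . .
    # A stand-in scoring model: here, just an average of what it has seen.

    def algorithm(data):
        return sum(data) / len(data)

    data = [0.4, 0.6, 0.5]            # data1: already shaped by collection choices
    for generation in range(1, 4):
        output = algorithm(data)      # the output produced from the current data
        data.append(output)           # the output becomes data2, data3, ...
        print(f"generation {generation}: score = {output:.3f}")

    # Each pass pulls the data set toward the model's own earlier outputs,
    # one way the "constricting output" described above crowds out the
    # unplanned variation de Certeau calls la perruque.
    ```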

    We have learned to think of algorithms in directional terms. We understand them as transformative processes that operate upon data sets to create outputs. The problematic relationships of data > algorithm > output become even more visible when we recognize that data sets have already been collected according to categories and processes that embody political, economic, and ideological biases. The ideological origin of the collected data – the biases of the questions posed in order to generate “inputs” – is yet another kind of black box, a box prior to the black box of the algorithm, a prior structure inseparable from the algorithm’s hunger for (using the mathematicians’ language) a domain upon which it can act to produce a range of results. The nature of the algorithm controls what items from the domain (data set) can be used; conversely, the nature of the data set controls what the algorithm has available to act upon and transform into descriptive and prescriptive claims. The inputs are as much a black box as the algorithm itself. Thus, opaque algorithms operate upon opaque data sets (Pasquale 2016, 204) in ways that nonetheless embody the inescapable “politics of large numbers” that is the topic of Desrosières’s history of statistical reasoning (2002). This interplay forces us to recognize that the algorithm inherits biases and that these are then compounded by operations within the two black boxes to become doubly biased outputs. It might be more revelatory to describe the algorithmic process as “stimuli” > algorithm > “responses.” Re-naming “input” as “stimuli” emphasizes the selection process that precedes the algorithmic act; re-naming “output” as “response” establishes the entire process as human, cultural, and situated. This is familiar territory to psychology. Digital technologies are texts whose complexity emerges when approached using established tools for textual analysis. Rotman and other mathematicians directly state their use of semiotics. They turn to phenomenology to explicate the reader/writer interaction, and they approach mathematical texts with terms like narrator, self-referential, and recursion. Most of all, they explore the problem of mathematical representation when mathematics itself is complicated by its referential, formal, and psychological statuses.
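
    A schematic example makes the double black box concrete. In the sketch below (invented numbers, a deliberately simple scoring rule of our own devising), the algorithm is arithmetically neutral, yet its “responses” faithfully reproduce a patrolling decision made before any computation began:

    ```python
    # The "black box before the black box": the domain already encodes a
    # collection bias, so even a neutral transformation re-presents it.

    def risk_score(recorded_arrests):
        """A 'neutral' rule: more recorded arrests, higher score (capped at 1)."""
        return min(recorded_arrests / 10.0, 1.0)

    # Heavily patrolled neighborhoods generate more *recorded* arrests
    # regardless of underlying behavior; the data set is a patrol pattern.
    recorded = {"heavily_patrolled": 8, "lightly_patrolled": 2}

    for neighborhood, arrests in recorded.items():
        print(neighborhood, risk_score(arrests))
    # The output ranks the heavily patrolled neighborhood as "riskier":
    # a stimulus > algorithm > response chain, not a window onto the real.
    ```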

    The fetishization of mathematics is a fundamental strategy for exempting digital technologies from theory, history, and critique. Two responses are essential: first, to clarify the nostalgic mathematics at work in the mathematical rhetoric of Big Data and its tools; and second, to offer analogies that step beyond naïve notions of re-presentation to more productive critiques. Analogy is essential because analogy is itself a performance of the anti-representational claim that digital technologies need to be understood as socially constructed by the same forces that instantiate any technology. Bruno Latour frames the problem of the critical stance as three-dimensional:

    The critics have developed three distinct approaches to talking about our world: naturalization, socialization and deconstruction . . . . When the first speaks of naturalized phenomena, then societies, subjects, and all forms of discourse vanish. When the second speaks of fields of power, then science, technology, texts, and the contents of activities disappear. When the third speaks of truth effects, then to believe in the real existence of brain neurons or power plays would betray enormous naiveté. Each of these forms of criticism is powerful in itself but impossible to combine with the other. . . . Our intellectual life remains recognizable as long as epistemologists, sociologists, and deconstructionists remain at arm’s length, the critique of each group feeding on the weaknesses of the other two. (1993, 5-6)

    Latour then asks, “Is it our fault if the networks are simultaneously real, like nature, narrated, like discourse, and collective like society?” (6). He goes on to assert, “Analytic continuity has become impossible” (7). Similarly, Rotman’s history of the zero finds that the concept problematizes the hope that a “field of entities” exists prior to “the meta-sign which both initiates the signifying system and participates within it as a constituent sign”; he continues, “the simple picture of an independent reality of objects providing a pre-existing field of referents for signs conceived after them . . . cannot be sustained” (1987, 27). Our own approach is heterogeneous; we use notions of fetish, re-presentation, and Gödelian metaphor to try to bypass the critical immunity conferred on digital technologies by naturalistic mathematical claims.

    Whether we use Latour’s description of the mutually exclusive methods of talking about the world – naturalization, socialization, deconstruction – or Rotman’s three starting points for the semiotic analysis of mathematical signs – referential, formal, and psychological – we can contextualize the claims of the Big Data fetishists so that the manifestations of Big Data thinking – policing practices, financial privilege, educational opportunity – are not misrepresented as only a mathematical/statistical question about assessing the results of supposedly neutral interventions, decisions, or judgments. If we are confined to those questions, we will only operate within the referential domains described by Rotman or the realm of naturalization described by Latour. Claims of a-contextual validity deny their own contextual status by asserting that operations, uses, and conclusions are exempt from the aggregated array of partial theorizations applied, in this case, to mathematics. This historical/critical application reveals the contradictory world concealed and perpetuated by the corporatized mathematics of contemporary digital culture. However, deploying a constellation of critical methods – historical, semiotic, psychological – prevents the critique from falling prey to the totalism that afflicts the thinking of these New Pythagoreans. This array includes concepts such as fetishization from the pre-digital world of psychoanalysis.

    The concept of the fetish has fallen on hard times as the star of psychoanalysis sinks into the West’s neurochemical sea. But its original formulation remains useful because it seeks to address the gap between representational formulas and their objects. For example – drawing on the quintessential heterosexual, male figure who is central to psychoanalysis – the male shoe fetishist makes no distinction between a pair of Louboutins and the “normal” object of his sexual desire. Fenichel asserts (1945, 343) that such fetishization is “an attempt to deny a truth known simultaneously by another part of the personality,” and enables the use of denial. Such explanations may seem quaint, but that is not the point. The point is that within one of the most powerful metanarratives of the past century – psychoanalysis – scientists faced the contorted and defective nature of human symbolic behavior in its approach to a putative “real.” The fetish offers an illusory real that protects the fetishist against the complexities of the real. Similarly, the New Pythagoreans of Big Data offer an illusory real – a misconstrued mathematics – that often paralyzes resistance to their profit-driven, totalistic claims. In both cases, the fetish becomes the “real” while simultaneously protecting the fetishist from contact with whatever might be more human and more complex.

    Wired Magazine’s “daily fetish” seems an ironic reversal of the term’s functional meaning. Its steady stream of technological gadgets has an absent referent, a hyperreal as Baudrillard styles it, that is exactly the opposite of the material “real” that psychoanalysis sees as the motivation of the fetish. In lived life, the anxiety is provoked by the real; in digital fetishism, the anxiety is provoked by the absence of the real. The anxiety of absence provokes the frenzied production of digital fetishes. Their inevitable failure – because representation always fails – drives the proliferation of new, replacement fetishes, and these become a networked constellation that forms a sort of simulacrum: a model of an absence that the model paradoxically attempts to fill. Each failure accentuates the gap, thereby accentuating the drive toward yet another digital embodiment of the missing part. Industry newsletters exemplify the frantic repetition required by this worldview. For example, Edsurge proudly reports an endless stream of digital edtech products, each substituting for the awkward, fleshly messiness of learning. And each substitution claims to validate itself via mathematical claims of representation. And almost all fade away as the next technology takes its place. Endless succession.

    This profusion of products clamoring to be the “real” object suggests a sort of cultural castration anxiety, a term that might prove less outmoded if we note the preponderance of males in the field who busily give birth to objects with the characteristics of the living beings they seek to replace.[12] The absence at the core of this process is the unbridgeable gap between word and world. Mathematics is especially useful to such strategies because it is embedded in the culture as both the discoverer and validator of objective true/false judgments. These statements are understood to demonstrate a reality that “exists prior to the mathematical act of investigating it” (Rotman 2000, 6). It provides the certainty, the “real” that the digital fetish simultaneously craves and fears. Mathematics short-circuits the problematic question that drives the anxiety about a knowable “real.” The point here is not to revive psychoanalytic thinking, but rather to see how an anxiety mutates and invites the application of critical traditions that themselves embody a response to the incompleteness and inconsistency of sign systems. The psychological model expands into the destabilized social world of digital culture.

    The notion of mathematics as a complete and consistent equivalent of the real is a longstanding feature of Western thought. It both creates and is created by the human need for a knowable real. Mathematics reassures the culture because its formal characteristics seem to operate without referents in the real world, and thus its language seems to become more real than any iteration of its formal processes. However, within mathematical history, the story is more convoluted, in part because of the immense practical value of applied mathematics. While semiotic approaches to this history engage and describe the social construction of mathematics, an important question remains about the completeness and consistency of mathematical systems. The history of this concern connects both the technical question and the popular interest in the power of languages – natural and/or mathematical – to represent the real. Again, these are not just technical, expert questions; they leak into popular metaphor because they embody a larger cultural anxiety about a knowable real. If Pythagorean notions have affected the culture for 2,500 years, then contemporary culture, we want to claim, embodies the anxiety of uncertainty that is revealed not only in its mathematics, but also in the contemporary arguments about algorithmic bias, completeness, and consistency.

    The nostalgia for a fully re-presentational sign system becomes paired with the digital technologies – software, hardware, networks, query strategies, algorithms, black boxes – that characterize daily life. However, this nostalgic rhetoric has a naïveté that embodies the craving for a stable and knowable external world. The culture often responds to it through objects inscribed with the certainty imputed to mathematics, and thus these digital technologies are felt to satisfy a deeply felt need. The problematic nature of mathematics matters little in terms of personalized shopping choices or customizing the ideal playlist. Although these systems rarely achieve the goal of “knowing what you want before you want it,” we seldom balk at the claim because the stakes are so low. However, where these claims have life-altering, and in some cases life and death, implications – education, policing, health care, credit, safety net benefits, parole, drone targets – we need to understand them so they can be challenged, and where needed, resisted. Resistance addresses two issues:

    1. That the traditional mystery and power of number seem to justify the refusal of transparency. The mystified tools point upward to the supposed mysterium of the mathematical realm.
    2. That the genuflection before the mathematical mysterium has an insatiable hunger for illustrations that show the world is orderly and knowable.

    Together, these two positions combine to assert the mythological status of mathematics and set it in opposition to critique. However, that status is vulnerable on several fronts. As Pasquale makes clear, legislation – language in action – can begin the demystification; proprietary claims are mundane imitations of the old Pythagorean illusions, and outside of political pressure and legislation there is little incentive for companies to open their algorithms to auditing. However, once pried open by legislation, the wizard behind the curtain and the Mechanical Turk show their hand. With transparency comes another opportunity: demythologizing technologies that fetishize the re-presentational nature of mathematics.

    _____

    Chris Gilliard’s scholarship concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students.

    Hugh Culik teaches at Macomb Community College. His work examines the convergence of systematic languages (mathematics and neurology) in Samuel Beckett’s fiction.


    _____

    Notes

    [1] Rotman’s work along with Amir Alexander’s cultural history of the calculus (2014) and Rebecca Goldstein’s (2006) placement of Gödel’s theorems in the historical context of mathematics’ conceptual struggle with the consistency and completeness of systems exemplify the movement to historicize mathematics. Alexander and Rotman are mathematicians, and Goldstein is a logician.

    [2] Other mathematical concepts have hypericonic status. For example, triangulation serves psychology as a metaphor for a family structure that pits two members against a third. Politicians “triangulate” their “position” relative to competing viewpoints. But because triangulation works in only two dimensions, it produces gross oversimplifications in other contexts. Nora Culik (pers. comm.) notes that a better metaphor would be multilateration, which locates an unknown point by measuring the differences in a signal’s arrival times at two or more known points; the possible locations form a hyperboloid, a metaphor that allows for uncertainty in understanding multiply determined concepts. Both re-present an object’s position, but each carries implicit ideas of space.

    [3] Faith in the representational power of mathematics is central to hedge funds. Bridgewater Associates, a fund that manages more than $150 billion US, is building software to automate the staffing of strategic planning. The software seeks to model the cognitive structure of founder Raymond Dalio and is meant to perpetuate his mind beyond his death. Dalio variously refers to the project as “The Book of the Future,” “The One Thing,” and “The Principles Operating System.” The project has drawn the enthusiastic attention of many popular publications such as The Wall Street Journal, Forbes, Wired, Bloomberg, and Fortune. The project’s model seems to operate on two levels: first, as a representation of Dalio’s mind, and second, as a representation of the dynamics of investing.

    [4] Numbers are divided into categories that grow in complexity. The development of numbers is an index to the development of the field (Kline, Mathematical Thought, 1972). For a careful study of the problematic status of zero, see Brian Rotman, Signifying Nothing: The Semiotics of Zero (1987). Amir Aczel, Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers (2015) offers a narrative of the historical origins of number.

    [5] Eugene Wigner (1959) asserts an ambiguous claim for a mathematizable universe. Responses include Max Tegmark’s “The Mathematical Universe” (2008), which sees the question as imbricated in a variety of computational, mathematical, and physical systems.

    [6] The anxiety of representation characterizes the shift from the literary moderns to the postmodern. For example, Samuel Beckett’s intense interest in mathematics and his strategies – literalization and cancellation – typify the literary responses to this anxiety. In his first published novel, Murphy (1938), one character mentions “Hippasos the Akousmatic, drowned in a mud puddle . . . for having divulged the incommensurability of side and diagonal” (46). Beckett uses detailed references to Descartes, Geulincx, Gödel, and 17th Century mathematicians such as John Craig to literalize the representational limits of formal systems of knowledge. Andrew Gibson’s Beckett and Badiou (2006) provides a nuanced assessment of mathematics, literature, and culture in Beckett’s work.

    [7] See Frank Kermode, The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue (2000) for an overview of the apocalyptic tradition in Western culture and the totalistic responses it evokes in politics. While mathematics dealt with indeterminacy, incompleteness, inconsistency, and failure, the political world simultaneously saw a countervailing regressive collapse: Mein Kampf in 1925, Hitler’s appointment as Chancellor of Germany in 1933, the Soviet Gulag in 1934. The fascist bent of Ezra Pound, T. S. Eliot’s After Strange Gods, and D. H. Lawrence’s Mexican fantasies suggest the anxiety of re-presentation that gripped the culture.

    [8] Davis and Hersh (21) divide probability theory into three aspects: 1) theory, which has the same status as any other branch of mathematics; 2) applied theory that is connected to experimentation’s descriptive goals; and 3) applied probability for practical decisions and actions.

    [9] For primary documents, see Jean Van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (1967). Ernest Nagel and James Newman, Gödel’s Proof (1958) explains the steps of Gödel’s proofs and carefully restricts their metaphoric meanings; Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (1979) places the work in the conceptual history that now leads to the possibility of artificial intelligence.

    [10] See Richard Nash, John Craige’s Mathematical Principles of Christian Theology (1991) for a discussion of the 17th Century mathematician and theologian who attempted to calculate the rate of decline of faith in the Gospels so that he would know the date of the Apocalypse. His contributions to calculus and statistics emerge in a context we find absurd, even if his friend, Isaac Newton, found them valuable.

    [11] An equally foundational problem – the mathematics of infinity – occupies a position similar to the questions addressed by Gödel. Cantor’s opening of set theory exposes the problems that the infinite poses to formal mathematics and offers a framework for addressing them.

    [12] For the historical appearances of the masculine version of this anxiety, see Dennis Todd’s Imagining Monsters: Miscreations of the Self in Eighteenth Century England (1995).

    _____

    Works Cited

    • Aczel, Amir. 2015. Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers. New York: St. Martin’s Griffin.
    • Alexander, Amir. 2014. Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. New York: Macmillan.
    • Anderson, Chris. 2008. “The End of Theory.” Wired Magazine 16, no. 7.
    • Beckett, Samuel. 1957. Murphy (1938). New York: Grove.
    • Berk, Richard. 2011. “Q&A with Richard Berk.” Interview by Greg Johnson. PennCurrent (Dec 15).
    • Blanchette, Patricia. 2014. “The Frege-Hilbert Controversy.” In Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy.
    • boyd, danah, and Kate Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15, no. 5. doi:10.1080/1369118X.2012.678878.
    • de Certeau, Michel. 1988. The Practice of Everyday Life. Translated by Steven Rendall. Berkeley: University of California Press.
    • Davis, Philip, and Reuben Hersh. 1981. The Mathematical Experience. Boston: Birkhäuser.
    • Desrosières, Alain. 2002. The Politics of Large Numbers: A History of Statistical Reasoning. Translated by Camille Naish. Cambridge: Harvard University Press.
    • Eagle, Christopher. 2007. “‘Thou Serpent That Name Best’: On Adamic Language and Obscurity in Paradise Lost.” Milton Quarterly 41:3. 183-194.
    • Fenichel, Otto. 1945. The Psychoanalytic Theory of Neurosis. New York: W. W. Norton & Company.
    • Gibson, Andrew. 2006. Beckett and Badiou: The Pathos of Intermittency. New York: Oxford University Press.
    • Goldstein, Rebecca. 2006. Incompleteness: The Proof and Paradox of Kurt Gödel. New York: W.W. Norton & Company.
    • Guthrie, William Keith Chambers. 1962. A History of Greek Philosophy: Vol.1 The Earlier Presocratics and the Pythagoreans. Cambridge: Cambridge University Press.
    • Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
    • Kermode, Frank. 2000. The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue. New York: Oxford University Press.
    • Kline, Morris. 1980. Mathematics: The Loss of Certainty. New York: Oxford University Press.
    • Latour, Bruno. 1993. We Have Never Been Modern. Translated by Catherine Porter. Cambridge: Harvard University Press.
    • Mitchell, W. J. T. 1995. Picture Theory: Essays on Verbal and Visual Representation. Chicago: University of Chicago Press.
    • Nagel, Ernest and James Newman. 1958. Gödel’s Proof. New York: New York University Press.
    • Office of Educational Technology at the US Department of Education. “Jose Ferreira: Knewton – Education Datapalooza.” Filmed November 2012. YouTube video, 9:47. Posted November 2012. https://youtube.com/watch?v=Lr7Z7ysDluQ.
    • O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown.
    • Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge: Harvard University Press.
    • Pigliucci, Massimo. 2009. “The End of Theory in Science?” EMBO Reports 10, no. 6.
    • Porter, Theodore. 1986. The Rise of Statistical Thinking, 1820-1900. Princeton: Princeton University Press.
    • Rotman, Brian. 1987. Signifying Nothing: The Semiotics of Zero. Stanford: Stanford University Press.
    • Rotman, Brian. 2000. Mathematics as Sign: Writing, Imagining, Counting. Stanford: Stanford University Press.
    • Tegmark, Max. 2008. “The Mathematical Universe.” Foundations of Physics 38 no. 2: 101-150.
    • Todd, Dennis. 1995. Imagining Monsters: Miscreations of the Self in Eighteenth Century England. Chicago: University of Chicago Press.
    • Turchin, Valentin. 1977. The Phenomenon of Science. New York: Columbia University Press.
    • Van Heijenoort, Jean. 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Vol. 9. Cambridge: Harvard University Press.
    • Wigner, Eugene P. 1959. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11. Reprinted in Communications on Pure and Applied Mathematics 13:1 (1960). 1-14.
    • Wu, Xiaolin, and Xi Zhang. 2016. “Automated Inference on Criminality using Face Images.” arXiv preprint: 1611.04135.
  • David Golumbia — The Digital Turn

    David Golumbia — The Digital Turn

    David Golumbia

    Is there, was there, will there be, a digital turn? In (cultural, textual, media, critical, all) scholarship, in life, in society, in politics, everywhere? What would its principles be?

    The short prompt I offered to the contributors to this special issue did not presume to know the answers to these questions.

    That means, I hope, that these essays join a growing body of scholarship and critical writing (much, though not by any means all, of it discussed in the essays that make up this collection) that suspends judgment about certain epochal assumptions built deep into the foundations of too much practice, thought, and even scholarship about just these questions.

    • In “The New Pythagoreans,” Chris Gilliard and Hugh Culik look closely at the long history of Pythagorean mystic belief in the power of mathematics and its near-exact parallels in contemporary promotion of digital technology, and especially surrounding so-called big data.
    • In “From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ after the Digital Turn,” Zachary Loeb asks about the nature of the literal and metaphorical machines around us via a discussion of the work of the 20th century writer and social critic Lewis Mumford, one of the thinkers who most fully anticipated the digital revolution and understood its likely consequences.
    • In “Digital Proudhonism,” Gavin Mueller writes that “a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry and reformulate both a theory of cultural production under digital capitalism as well as radical politics of work and technology for the 21st century.”
    • In “Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn,” Tim Duffy pushes back “against the valorization of ‘tools’ and ‘making’ in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond) has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.”
    • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling, in “Origin Stories in the Genealogy of Cherokee Language Technology,” argue that “the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code.”
    • In “Artificial Saviors,” tante connects the pseudo-religious and pseudo-scientific rhetoric found at a surprising rate among digital technology developers and enthusiasts: “When AI morphed from idea or experiment to belief system, hackers, programmers, ‘data scientists,’ and software architects became the high priests of a religious movement that the public never identified and parsed as such.”
    • In “The Endless Night of Wikipedia’s Notable Woman Problem,” Michelle Moravec “takes on one of the ‘tests’ used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.”
    • In “The Computational Unconscious,” Jonathan Beller interrogates the “penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing.”
    • In “What Indigenous Literature Can Bring to Electronic Archives,” Siobhan Senier asks, “How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?”
    • Rob Hunter provides the following abstract of “The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory”:

      The digital turn is associated with considerable enthusiasm for the democratic or even emancipatory potential of networked computing. Free, libre, and open source (FLOSS) developers and maintainers frequently endorse the claim that the digital turn promotes democracy in the form of improved deliberation and equalized access to information, networks, and institutions. Interpreted in this way, democracy is an ethical practice rather than a form of struggle or contestation. I argue that this depoliticized conception of democracy draws on commitments—regarding personal autonomy, the ethics of intersubjectivity, and suspicion of mass politics—that are also present in recent strands of liberal political thought. Both the rhetorical strategies characteristic of FLOSS as well as the arguments for deliberative democracy advanced within contemporary political theory share similar contradictions and are vulnerable to similar critiques—above all in their pathologization of disagreement and conflict. I identify and examine the contradictions within FLOSS, particularly those between commitments to existing property relations and the championing of individual freedom. I conclude that, despite the real achievements of the FLOSS movement, its depoliticized conception of democracy is self-inhibiting and tends toward quietistic refusals to consider the merits of collective action or the necessity of social critique.

    • John Pat Leary, in “Innovation and the Neoliberal Idioms of Development,” “explores the individualistic, market-based ideology of ‘innovation’ as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called ‘development.’” He works “to define the ideology of ‘innovation’ that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.”
    • Annemarie Perez, in “UndocuDreamers: Public Writing and the Digital Turn,” writes of a “paradox” she finds in her work with students who belong to communities targeted by recent immigration enforcement crackdowns and the default assumptions about “open” and “public” found in so much digital rhetoric: “My students should write in public. Part of what they are learning in Chicanx studies is about the importance of their voices, of their experiences and their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder how I as an instructor should either encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched, every aspect of their lives.”
    • Gretchen Soderlund, in “Futures of Journalisms Past (or, Pasts of Journalism’s Future),” looks at discourses of “the future” in journalism from the 19th and 20th centuries, in order to help frame current discourses about journalism’s “digital future,” in part because “when it comes to technological and economic speedup, journalism may be the canary in the mine.”
    • In “The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus,” Anthony Galluzzo examines the often-misunderstood and misrepresented writings of William Godwin, and also those of Thomas Malthus, to demonstrate how far back in English-speaking political history go the roots of today’s technological Prometheanism, and how destructive it can be, especially for the political left.

    “Digital Turn” Table of Contents