Indigenous people are here—here in digital space just as ineluctably as they are in all the other “unexpected places” where historian Philip Deloria (2004) suggests we go looking for them. Indigenous people are on Facebook, Twitter, and YouTube; they are gaming and writing code, podcasting and creating apps; they are building tribal websites that disseminate immediately useful information to community members while asserting their sovereignty. And they are increasingly present in electronic archives. We are seeing the rise of Indigenous digital collections and exhibits at most of the major heritage institutions (e.g., the Smithsonian) as well as at a range of museums, universities and government offices. Such collections carry the promise of giving tribal communities more ready access to materials that, in some cases, have been lost to them for decades or even centuries. They can enable some practical, tribal-nation rebuilding efforts, such as language revitalization projects. From English to Algonquian, an exhibit curated by the American Antiquarian Society, is just one example of a digitally-mediated collaboration between tribal activists and an archiving institution that holds valuable historic Native-language materials.
“Digital repatriation” is a term now used to describe many Indigenous electronic archives. These projects create electronic surrogates of heritage materials, often housed in non-Native museums and archives, making them more available to their tribal “source communities” as well as to the larger public. But digital repatriation has its limits. It is not, as some have pointed out, a substitute for the return of the original items. Moreover, it does not necessarily challenge the original archival politics. Most current Indigenous digital collections, indeed, are based on materials held in universities, museums and antiquarian societies—the types of institutions that historically had their own agendas of salvage anthropology, and that may or may not have come by their materials ethically in the first place. There are some practical reasons that settler institutions might be first to digitize: they tend to have rather large quantities of material, along with the staff, equipment and server space to undertake significant electronic projects. The best of these projects are critically self-conscious about their responsibilities to tribal communities. And yet the overall effect of digitizing settler collections first is to perpetuate colonial archival biases—biases, for instance, toward baskets and buckskins rather than political petitions; biases toward sepia photographs of elders rather than elders’ letters to state and federal agencies; biases toward more “exotic” images, rather than newsletters showing Native activists successfully challenging settler institutions to acknowledge Indigenous peoples’ continuous, and political presence.
Those petitions, letters and newsletters do exist, but they tend to reside in the legions of small archives gathered, protected and curated by tribal people themselves, often with gallingly little material support or recognition from outside their communities. While it is true that many Indigenous cultural heritage items have been taken from their source communities for display in remote collecting institutions, it is also true that Indigenous people have continued to maintain their own archives of books, papers and art objects in tribal offices, tribal museums, attics and garages. Such items might be in precarious conditions of preservation, subject to mold, mildew or other damage. They may be incompletely inventoried, or catalogued only in an elder’s memory. And they are hardly ever digitized. A recent survey by the Association of Tribal Archives, Libraries, and Museums (2013) found that, even though digitization is now the industry standard for libraries and archives, very few tribal collections in the United States are digitizing anything at all. Moreover, the survey found, this often isn’t for lack of desire, but for lack of resources—lack of staff and time, lack of access to adequate equipment and training, lack of broadband.[1]
Tribally stewarded collections often hold radically different kinds of materials that tell radically different stories from those historically promoted by institutions that thought they were “preserving” cultural remnants. Of particular interest to me as a literary scholar is the Indigenous writing that turns up in tribal and personal archives: tribal newsletters and periodicals; powwow and pageant programs; mimeographed books used to teach language and traditional narratives; recorded oral histories; letters, memoirs and more. Unlike the ethnographers’ photographs, colonial administrators’ records and (sometimes) decontextualized material objects that dominate larger museums, these writings tell stories of Indigenous survival and persistence. In what follows, I give a brief review of some of the best-known Indigenous electronic archives, followed by a consideration of how digitizing Indigenous writing, specifically, could change the way we see such archives. In their own recirculations of their writings online, Native people have shown relatively little interest in the concerns that currently dominate the field of Digital Humanities, including “preservation,” “open access,” “scalability,” and (perhaps the most unfortunate term in this context) “discoverability.” They seem much keener to continue what those literary traditions have in fact always done: assert and enact their communities’ continuous presence and political viability.
Digital Repatriation and Other Consultative Practices
Indigenous digital archives are very often based in universities and headed by professional scholars, frequently with substantial community engagement. The Yale Indian Papers Project, which seeks to improve access to primary documents demonstrating the continuous presence of Indigenous people in New England, elicits editorial assistance from a number of Indigenous scholars and tribal historians. The award-winning Plateau Peoples’ Web Portal at Washington State University takes this collaborative methodology one step further, inviting consultants from neighboring tribal nations to come into the university archives and select and curate materials for the web. Other digital Indigenous exhibits come from prestigious museums and collecting institutions, like the American Philosophical Society’s “Native American Images Project.” Indeed, with so many libraries, museums and archives now creating digital collections (whether in the form of e-books, scanned documents, or full electronic exhibits), materials related to Indigenous people can be found in an ever-growing variety of formats and places. Hence the rising popularity of portals—regional or state-based sites that can act as gateways to a wide variety of digital collections. Some are specific to Indigenous topics and locations, like the Carlisle Indian School Digital Resource Center, which compiles web-based resources for studying U.S. boarding school history. Other digital portals sweep up Indigenous objects along with other cultural materials, like the Maine Memory Network or the Digital Public Library of America.
It is not surprising that the bent of most of these collections is decidedly ethnographic, given that Indigenous people the world over have been the subjects of one prolonged imperial looting. Cultural heritage professionals are now legally (or at least ethically) required to repatriate human remains and sacred objects, but in recent years, many have also begun to speak of “digital repatriation.” Just as digital collections of all kinds are providing new access to materials held in far-flung locations, Indigenous digital collections are arguably a boon to elders and other Native people who live far from institutions like the Smithsonian, enabling them to view their cultural property readily. The digitization of heritage materials can, in fact, help promote cultural revitalization and culturally responsive teaching (Roy and Christal 2002; Srinivasan et al. 2010). Many such projects aim expressly “to reinstate the role of the cultural object as a generator, rather than an artifact, of cultural information and interpretation” (Brown and Nicholas 2012, 313).
Nonetheless, Indigenous people may be forgiven if they take a dim view of their cultural heritage items being posted willy nilly on the internet. Some have questioned whether digital repatriation is a subterfuge for forestalling or refusing the return of the original items. Jim Enote (Zuni), Executive Director of the A:shiwi A:wan Museum and Heritage Center, has gone so far as to say that the words “digital” and “repatriation” simply don’t belong in the same sentence, pointing out that nothing in fact is being repatriated, since even the digital item is, in most cases, also created by a non-Native institution (Boast and Enote 2013, 110). Others worry about the common assumption that unfettered access to information is always and everywhere an unqualified good. Anthropologist Kimberly Christen has asked pointedly, “Does Information Really Want to be Free?” Her answer: “For many Indigenous communities in settler societies, the public domain and an information commons are just another colonial mash-up where their cultural materials and knowledge are ‘open’ for the profit and benefit of others, but remain separated from the sociocultural systems in which they were and continue to be used, circulated, and made meaningful” (Christen 2012, 2879-80).
A truly decolonized archive, then, calls for a critical re-examination of the archive itself. As Ellen Cushman (Cherokee) puts it, “Archives of Indigenous artifacts came into existence in part to elevate the Western tradition through a process of othering ‘primitive’ and Native traditions . . . . Tradition. Collection. Artifacts. Preservation. These tenets of colonial thought structure archives whether in material or digital forms” (Cushman 2013, 119). The most critical digital collections, therefore, are built not only through consultation with Indigenous knowledge-keepers, but also with considerable self-consciousness about the archival endeavor itself. The Yale editors, for instance, explain that “we cannot speak for all the disciplines that have a stake in our work, nor do we represent the perspective of Native people themselves . . . . [Therefore tribal] consultants’ annotations might include Native origin stories, oral sources, and traditional beliefs while also including Euro-American original sources of the same historical event or phenomena, thus offering two kinds of narratives of the past” (Grant-Costa, Glaza, and Sletcher 2012). Other sites may build this archival awareness into the interface itself. Performing Archive: Curtis + the “vanishing race,” for instance, seeks explicitly to “reject enlightenment ideals of the cumulative archive—i.e. that more materials lead to better, more accurate knowledge—in order to emphasize the digital archive as a site of critique and interpretation, wherein access is understood not in terms of access to truth, but to the possibility of past, present, and future performance” (Kim and Wernimont 2014).
Additional innovations worth mentioning here include the content management system Mukurtu, initially developed by Christen and her colleagues to facilitate culturally responsive archiving for an Aboriginal Australian collection, and quickly embraced by projects worldwide. Built on the recognition that “Indigenous communities across the globe share similar sets of archival, cultural heritage, and content management needs” (Christen 2005, 317), Mukurtu lets communities build their own digital collections and exhibits, while giving them fine-grained control over who can access those materials—e.g., through tribal membership, clan system, family network, or some other benchmark. Christen and her colleague Jane Anderson have also created a system of traditional knowledge (TK) licenses and labels—icons that can be placed on a website to help educate site visitors about the culturally appropriate use of heritage materials. The licenses (e.g., “TK Commercial,” “TK Non-Commercial”) are meant to be legal instruments for owners of heritage material; a tribal museum, for instance, could use them to signal how it intends for electronic material to be used or not used. The TK labels, meanwhile, are extra-legal tools meant to educate users about culturally appropriate approaches to material that may, legalistically, be in the “public domain,” but that from a cultural standpoint has certain restrictions: e.g., “TK Secret/Sacred,” “TK Women Restricted,” “TK Community Use Only.”
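To make the access model concrete, here is a minimal, hypothetical sketch in Python of how community-scoped permissions and TK labels might travel with an item’s metadata. It is not Mukurtu’s actual schema or code, and the names in it (HeritageItem, can_view, the group labels) are invented purely for illustration:

# Hypothetical sketch only: community-scoped access rules and TK labels in the
# spirit of Mukurtu and the TK label system; not the actual Mukurtu schema or API.
from dataclasses import dataclass, field

@dataclass
class HeritageItem:
    title: str
    tk_labels: list = field(default_factory=list)     # e.g., ["TK Community Use Only"]
    allowed_groups: set = field(default_factory=set)  # empty set means publicly viewable

@dataclass
class User:
    name: str
    groups: set = field(default_factory=set)

def can_view(user: User, item: HeritageItem) -> bool:
    """Public if the community has set no group restrictions; otherwise the
    user must belong to at least one group the community has designated."""
    if not item.allowed_groups:
        return True
    return bool(user.groups & item.allowed_groups)

# Example: a recording the community has restricted to tribal members.
recording = HeritageItem(
    title="Oral history recording",
    tk_labels=["TK Community Use Only"],
    allowed_groups={"tribal_members"},
)
print(can_view(User("outside visitor", {"public"}), recording))           # False
print(can_view(User("community member", {"tribal_members"}), recording))  # True

The design point the sketch tries to capture is simply that the community’s own designations, stored as metadata alongside the item, govern who sees what.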
All of the projects described here, many still in their incipient stages, aim to decolonize archives at their core. They put Indigenous knowledge-keepers in partnership with computing and heritage management professionals to help communities determine how, whether, and why their collections shall be digitized and made available. As such, they have a great deal to teach digital literary projects—literary criticism (if I may) not being a profession historically inclined to consult with living subjects very much at all. Next, I ponder why, despite great strides in both Indigenous digital collections and literary digital collections, the twain have really yet to meet.
Electronic Textualities: The Pasts and Futures of Indigenous Literature
While signatures, deeds and other Native-authored texts surface occasionally in the aforementioned heritage projects, digital projects devoted expressly to Indigenous writing are relatively few and far between.[2] Granting that Aboriginal people, like any other people, do produce writings meant to be private, as a literary scholar I am confronted daily with a rather different problem than that of cultural “protection”: a great abundance of poetry, fiction and nonfiction written by Indigenous people, much of which just never sees the larger audiences for which it was intended. How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?
Literary history is another of those unexpected places in which Indians are always found. But while Indigenous literature—both historic and contemporary—has garnered increasing attention in the academy and beyond, the Digital Humanities does not seem to have contributed very much to the expansion and promotion of these canons. Conversely, while DH has produced some dynamic and diverse literary scholarship, scholars in Native American Studies seem to be turning toward this scholarship only slowly. Perhaps digital literary studies has not felt terribly inviting to Indigenous texts; many observers (Earhart 2012; Koh 2015) have remarked that the emerging digital literary canon, indeed, looks an awful lot like the old one, with the lion’s share of the funding and prestige going to predictable figures like William Shakespeare, William Blake, and Walt Whitman. At this moment, I know of no mass movement to digitize Indigenous writing, although a number of “public domain” texts appear in places like the Internet Archive, Google Books, and Project Gutenberg.[3] Indigenous digital literature seems light years away from the kinds of scholarly and technical standards achieved by the Whitman and Rossetti Archives. And without a sizeable or searchable corpus, scholarship on Indigenous literature likewise seems light years from the kinds of text mining, topic modeling and network analysis that are au courant in DH.
Instead, we see small-scale, emergent digital collections that nevertheless offer strong correctives to master narratives of Indigenous disappearance, and that supply further material for ongoing sovereignty struggles. The Hawaiian-language newspaper project is one powerful example. Started as a massive crowdsourcing effort that digitized at least half of the remarkable run of 100 native-language newspapers produced by Hawaiian people between the 1830s and the 1940s, it calls itself “the largest native-language cache in the Western world,” and promises to change the way Hawaiian history is seen. It might well do so if, as Noenoe Silva (2004, 2) has argued, “[t]he myth of [Indigenous Hawaiian] nonresistance was created in part because mainstream historians have studiously avoided the wealth of material written in Hawaiian.” A grassroots digitization movement like the Hawaiian Nupepa Project makes such studious avoidance much more difficult, and it brings to the larger world of Indigenous digital collections direct examples—through Indigenous literacy—of Indigenous political persistence.
It thus points to the value of the literary in Indigenous digitization efforts. Jessica Pressman and Lisa Swanstrom (2013) have asked, “What kind of scholarly endeavors are possible when we think of the digital humanities as not just supplying the archives and data-sets for literary interpretation but also as promoting literary practices with an emphasis on aesthetics, on intertextuality, and writerly processes? What kind of scholarly practices and products might emerge from a decisively literary perspective and practice in the digital humanities?” Abenaki historian Lisa Brooks (2012, 309) has asked similar questions from an Indigenous perspective, positing that digital space allows us to challenge conventional notions of literary periodization and of place, to “follow paths of intellectual kinship, moving through rhizomic networks of influence and inquiry.” Brooks and other literary historians have long argued that Indigenous people have deployed alphabetic literacy strategically to (re)build their communities, restore and revitalize their traditions, and exercise their political and cultural sovereignty. Digital literary projects, like the Hawaiian newspaper project, can offer powerful extensions of these practices in electronic space.
These were some of the questions and issues we had in mind when we started dawnlandvoices.org.[4] This archive is emergent—not a straight scan-and-upload of items residing in one physical site or group of sites, but rather a collaboration among tribal authors, tribal collections, and university-based scholars and students. It came out of a print volume, Dawnland Voices: An Anthology of Indigenous Writing from New England (Senier 2014), which I edited with eleven tribal historians. Organized by tribal nation, the book ranges from the earliest writings (petroglyphs and political petitions) to the newest (hip-hop poetry and blog entries). The print volume already aimed to be a counter-archive, insofar as it represents the literary traditions of “New England,” a region that has built its very identity on colonial dispossession, colonial boundaries and the myth of Indian disappearance. It also already aimed to decolonize the archive, insofar as it distributes editorial authority and control to Indigenous writers, historians and knowledge-keepers. At almost 700 pages, though, Dawnland in book form could only scratch the surface of the wealth of writing that regional Native people have produced, and that remains, for the most part, in their own hands.
We wanted a living document—one that could expand to include some of the vibrant pieces we could not fit in the book, one that could be revised and reshaped according to ongoing community conversation. And we wanted to keep presenting historic materials alongside new (in this case born-digital) texts, the better to highlight the long history of Indigenous writing in this region. But we also realized that this required resources. We approached the National Endowment for the Humanities and received a $38,000 Preservation and Access grant to explore how digital humanities resources might be better redistributed to empower tribal communities who want to digitize their texts, either for private tribal use or more public dissemination. The partners on this grant included three different, but representative kinds of collections: a tribal museum with some history of professional archiving and private support (the Tomaquag Indian Memorial Museum in Rhode Island); a tribal office that finds itself acting as an unofficial repository for a variety of papers and documents, and that does not have the resources to completely inventory or protect these (the Passamaquoddy Cultural Preservation Office in Maine); and four elders who have amassed a considerable collection of books, papers, and slides from their years working in the Boston Children’s Museum and Plimoth Plantation, and were storing these in their own homes (the Indigenous Resources Collaborative in Massachusetts). Under the terms of the grant, the University of New Hampshire sent digital librarians to each site to set up basic hardware and software for digitization, while training tribal historians in digitization basics. The end result of this two-year pilot project was a small exhibit of sample items from each archive.
The obstacles to this kind of work for small tribal collections are perhaps not unique, but they are intense. Digitization is expensive, time-consuming, and labor-intensive, even more so for collections that do not have ample (or any) paid staff, that can’t afford to update basic software or that don’t even have reliable internet connections. And there were additional hurdles: while the pressure from DH writ large (and granting institutions individually) is frequently to demonstrate scalability, in the end, the tribal partners on this grant did not coalesce around a shared goal of digitizing their collections wholesale. The Passamaquoddy tribal heritage preservation officer has scanned and uploaded the greatest quantity of material by far, but he has strategically zeroed in on dozens of tribal newsletters containing radical histories of Native resistance and survival in the latter half of the twentieth century. The Tomaquag Museum does want to digitize its entire collection, but it prefers to do so in-house, for optimum control of intellectual property. The Indigenous Resources Collaborative, meanwhile, would rather digitize and curate just a small handful of items as richly as possible. While these elders were initially adamant that they wanted to learn to scan and upload their own documents, they learned quickly just how stultifying this labor is. What excited them much more was the process of selecting individual documents and dreaming about how to best share these online. An old powwow flyer describing the Mashpee Wampanoag game of fireball, for instance, had them naming elders and players they could interview, with the possibility of adding metadata in the form of video or narrative audio.
More than a year after articulating this aspiration, the IRC has not begun to conduct or record any such interviews. Such a project is beyond their current energies, time and resources; and to be sure, any continuation of their work on this partner project at dawnlandvoices.org should be compensated, which will mean applying for new grants. But the delay or inaction also points to a larger conundrum: that for all of the Web’s avowed multimodality, Indigenous digital collections have generally not reflected the longstanding multimodality of Indigenous literatures themselves—in particular, their long and mutually sustaining interplay of oral and written forms. Some (Golumbia 2015) would attribute this to an unwillingness within DH to recognize the kinds of digital language work being done by Indigenous communities worldwide. Perhaps, too, it owes something to the history of violence embedded in “recording” or “preserving” Indigenous oral traditions (Silko 1981); the Indigenous partners with whom I have worked are generally skeptical of the need to put their traditional narratives—or even some of the recorded oral histories they may have stored on cassette—online. There is also the time and labor involved in recording. It is now common to hear digital publishers wax enthusiastic about the “affordances” of the Web (it seems so easy to just add an mp3), but with few exceptions, dawnlandvoices.org has not elicited many recordings, despite our invitations to authors to contribute them.
Unlike the texts in the most esteemed digital literature archives like the Rossetti Archive (edited, contextualized and encoded to the highest scholarly standard), the texts in dawnlandvoices.org are often rough, edgy, and unfinished; and that, quite possibly, is the way they will remain. Insofar as dawnlandvoices.org aspires to be a “database” at all (and we are not sure that it does), it makes sense at this point for there to be multiple pathways in and out of that collection, multiple ways of formatting and presenting material. It is probably fair to say that most scholars working on Indigenous digital archives dream of a day when these sites will have robust community engagement and commentary. At the same time, many would readily admit that it’s not as simple as building it and hoping they will come. David Golumbia (2015) has gone so far as to suggest that what marginalizes Indigenous projects within DH is the archive-centric nature of the field itself—that while “most of the major First Nations groups now maintain rich community/governmental websites with a great deal of information on history, geography, culture, and language. . . none of this work, or little of it, is perceived or labeled as DH.” Thus, the esteemed digital archives might not, in fact, be what tribal communities want most. Brown and Nicholas raise the equally provocative possibility that “[i]nstitutional databases may . . . already have been superseded by social networking sites as digital repositories for cultural information” (2012, 315). And, in fact, that most pervasive and understandably maligned of social-networking sites, Facebook, seems to be serving some tribal museums’, authors’ and historians’ immediate cultural heritage needs surprisingly well. Many post historic photos or their own writings to their walls, and generate fabulously rich commentary: identifications of individuals in pictures, memories of places and events, praise and criticism for poetry. Facebook is a proprietary and notoriously problematic platform, especially on the issue of intellectual property. And yet it has made room, at least for now, for a kind of fugitive curation that, however fragile, raises the question of whether such curation should be “institutional” at all. We can see similar things happening on Twitter (as in Daniel Heath Justice’s recent “year of tweets” naming Indigenous authors) and Instagram (where artists like Stephen Paul Judd store, share, and comment on their work). Outside of DH and settler institutions, Indigenous people are creating all kinds of collections that—if they are not “archives” in a way that satisfies professional archivists—seem to do what Native people, individually and collectively, need them to do. At least for today, these collections create what First Nations artists Jason Lewis and Skawennati Tricia Fragnito call “Aboriginally determined territories in cyberspace” (2005).
What the conversations initiated by Kim Christen, Jane Anderson, Jim Enote and others can bring to digital literature collections is a scrupulously ethical concern for Indigenous intellectual property, and an insistence on first voice and community engagement. What Indigenous literature, in turn, can bring to the table is an insistence on politics and sovereignty. Like many literary scholars, I often struggle with what (if anything) makes “Literature” distinctive. It’s not that baskets or katsina masks cannot be read as expressions of sovereignty—they can, and they are. But Native literatures—particularly the kinds saved by Indigenous communities themselves rather than by large collecting institutions and salvage anthropologists—provide some of the most powerful and overt archives of resistance and resurgence. The invisibility of these kinds of tribal stories and tribal ways of knowing and keeping stories is an ongoing concern, even on the “open” Web. It may be that Digital Humanities writ large will continue to struggle against the seeming centrifugal force of traditional literary and cultural canons. It is not likely, however, that Indigenous communities will wait for us.
_____
Siobhan Senier is associate professor of English at the University of New Hampshire. She is the editor of Dawnland Voices: An Anthology of Indigenous Writing from New England and of dawnlandvoices.org.
[1] A study by Native Public Media (Morris and Meinrath 2009) found that broadband access in and around Native American and Alaska Native communities was less than 10 percent, sometimes as low as 5 to 6 percent.
[3] The University of Virginia Electronic Texts Center at one time had an excellent collection of Native-authored or Native-related works, but these are now buried within the main digital catalog.
_____
Works Cited
Association of Tribal Archives, Libraries, and Museums. 2013. “International Conference Program.” Santa Ana Pueblo, NM.
Boast, Robin, and Jim Enote. 2013. “Virtual Repatriation: It Is Neither Virtual nor Repatriation.” In Peter Biehl and Christopher Prescott, eds., Heritage in the Context of Globalization. SpringerBriefs in Archaeology. New York, NY: Springer New York. 103–13.
Brooks, Lisa. 2012. “The Primacy of the Present, the Primacy of Place: Navigating the Spiral of History in the Digital World.” PMLA 127:2. 308–16.
Brown, Deidre, and George Nicholas. 2012. “Protecting Indigenous Cultural Property in the Age of Digital Democracy: Institutional and Communal Responses to Canadian First Nations and Māori Heritage Concerns.” Journal of Material Culture 17:3. 307–24.
Christen, Kimberly. 2005. “Gone Digital: Aboriginal Remix and the Cultural Commons.” International Journal of Cultural Property 12:3. 315–45.
Christen, Kimberly. 2012. “Does Information Really Want to Be Free?: Indigenous Knowledge Systems and the Question of Openness.” International Journal of Communication 6. 2870–93.
Cushman, Ellen. 2013. “Wampum, Sequoyan, and Story: Decolonizing the Digital Archive.” College English 76:2. 116–35.
Deloria, Philip Joseph. 2004. Indians in Unexpected Places. Lawrence: University Press of Kansas.
Roy, Loriene, and Mark Christal. 2002. “Digital Repatriation: Constructing a Culturally Responsive Virtual Museum Tour.” Journal of Library and Information Science 28:1. 14–18.
Senier, Siobhan, ed. 2014. Dawnland Voices: An Anthology of Indigenous Writing from New England. Lincoln: University of Nebraska Press.
Silko, Leslie Marmon. 1981. “An Old-Time Indian Attack Conducted in Two Parts.” In Geary Hobson, ed. The Remembered Earth: An Anthology of Contemporary Native American Literature. Albuquerque: University of New Mexico Press. 211–16.
Silva, Noenoe K. 2004. Aloha Betrayed: Native Hawaiian Resistance to American Colonialism. Durham: Duke University Press.
Srinivasan, Ramesh, et al. 2010. “Diverse Knowledges and Contact Zones within the Digital Museum.” Science, Technology, & Human Values 35:5. 735–68.
God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
– Plato, Timaeus
The term “computational capital” understands the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and recognizes contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus posited then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double-entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing, leaves no stone unturned. Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.
The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the ironclad movements of code. While “ironclad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines to the point where in some quarters “code is law” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production (categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.) are also deposited and thus operationally disappeared into our machines.
Put simply, and, in deference to contemporary attention spans, too soon, our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words, the inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?
Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867, 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton,” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest that not only has society as a whole become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai: Big Artificial Intelligence (the world system as organized by computational capital) and “you,” Little Artificial Intelligence (as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.
Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).
Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.
Introduction: The Computational Unconscious
Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay (and the book of which it is chapter one) develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power towards ends that for me are still best described by the word communist (or perhaps meta-communist, but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.
Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as being the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media-indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices, which is to say that the universal Turing machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine as among the most important abstract machines of the last century, save perhaps that of capital itself. However, both their ranking and even their separability are here what we seek to put into question.
Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.
The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.
Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes and indeed enforces a sovereignty tantamount to divinity, externalities be damned. Dissident, fugitive planning and black study—a studied refusal of optimization, a refusal of computational colonialism—may offer a way out of the current geo-(post-)political and its computational orthodoxy.
Computational Idolatry and Multiversality
In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology from a conference sponsored by the sovereign state of NYU:
As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)
Science fiction, in the form of “The Matrix,” is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world—or, rather, cosmic—spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality”; the trouble here is with “itself.” To the extent that we recognize that poesis (making) has been extended to our machines and that it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it, to the point that Reality itself is thus a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses, multiversal. In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.
Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”
The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that no sufficiently powerful logical system is internally closed: within any such system, statements can be formulated that can neither be proved nor disproved—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open, there is no closure, complete self-knowledge is impossible and thus there is always an outside or a beyond, while in the latter group’s view, our philosophy, our politics and apparently our fate are wedded to a system that can know no outside since it may only render an outside in its own terms, unless, or perhaps, even if/as that encounter is catastrophic.
Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoesis) all we can know is ourselves, and, on the other, Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any of us who are, willingly or not, participant in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx, vol. 28, 38–39).[3]
This “subject, society,” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).
The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present,” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious, is a historical formation. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.
More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled his theory of the unconscious on information theory and cybernetics. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns, including the following:
By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)
Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks as to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What Is Cybernetics? She writes that “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”
“You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”
Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that, given that cybernetic research in the West was driven by military and, later, industrial applications, that is, applications deemed essential for the development of capitalism and the capitalist way of life, such a statement calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes.[5] For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.
Perhaps more elusive even than this historicity of the unconscious grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,
Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)
This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing, by capital markets, by apparatuses, by digital computers, requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask: in what way have capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, structured computational paradigms? In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking, expressed in numbers, was materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notion 1) that the camera is a black box that is a programme, and 2) that the photograph or technical image produces a “magical” relation to the world inasmuch as people understand the photograph as a window rather than as information organized by concepts. This renders the technical image itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras. If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.
Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera, functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: The function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego and normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day-to-day and indeed nanosecond-to-nanosecond organization of life on planet earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked up, highly ramified social differences? Structural and now infra-structural inequalities include social injustices—what could be thought of as and in a certain sense are algorithmic racism, sexism and homophobia, and also programmatically unequal access to the many things that sustain life, and legitimize murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions, resulting from the near automation of geo-political emergence along with a vast conscription of agents, is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.
For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.
But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.
The short answer, then, regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation, that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly-established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically-interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed at profitably exploiting an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth. Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy, which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.
Dialectics of the Human-Machine
For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 pairs of chromosomes of the human genome that replicate what was once thought of as the human being, an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means, precisely, computation.
If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance, as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine, is on the wane.
Naming the human-machine is, of course, a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Civilization and Colonization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from and was deployed to do the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.
This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French Revolution, with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather it is the tendency that is unveiled by the present platform post-humanism of computational racial capital. The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?
So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean, for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisected cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been impacted by our machines. However, the POTUSian defeat of the so-called humans is double-edged, to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood the dominant discourse on the posthuman as not the end of the white liberal human subject but precisely, when in the hands of those not committed to an anti-racist and decolonial project, a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body. Sovereignty itself must change (in order, as Giuseppe Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.
Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However, we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things to ever happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most), on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals. However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may be as bad or worse, and in absolute numbers is already worse, for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?
For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. The quantitative exchange of equivalents, circulating as exchange values without qualities, is the real abstraction that gives rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money, that is, to argue that money as a medium is value-neutral because it embodies what has (in many respects correctly, but in a qualified way) been termed “the universal value form,” would be to miss the entire system of leveraged exploitation that sustains the money-system. In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in mere delusional utopianism and self-interested (might one even say psychotic?) naysaying.
Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species’ lifespan as an instance of a cosmic program is not so distant from the endemic violence of postmodern—and, indeed, post-human—fascism he depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. What we beheld in these two films was the annihilation of our agency, at the level of the individual and of the species—and it was eerily seductive, Benjamin’s self-destruction as an aesthetic pleasure of the highest order taken to cosmic proportions and raised to the level of Art (1969).
So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism, which, in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and which is now no longer, really has been pre-history, stages of pre-history? What would it mean to say in earnest “What’s past is prologue?”[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?
I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punchcard system and mechanical tabulator that were used in the 1890 U.S. census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which had normally taken eight to ten years to tabulate, now took a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without having to be rebuilt—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing Tabulating Recording Company, which renamed itself IBM in 1924.
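What made the plugboard an early form of programming can be restated in contemporary terms: the sorting sequence becomes a configuration supplied to a general-purpose mechanism rather than a behavior wired permanently into the device. The following minimal sketch is a generic illustration of that shift, not a model of Hollerith’s actual machinery; the records and field names are invented.

```python
# A generic illustration of configurable sorting (not a model of Hollerith's hardware):
# the "plugboard" is just an ordered list of fields, and changing it changes the sort
# sequence without rebuilding the mechanism that does the sorting.
records = [
    {"district": 2, "occupation": "clerk", "age": 31},
    {"district": 1, "occupation": "weaver", "age": 54},
    {"district": 1, "occupation": "clerk", "age": 23},
]

def tabulate(rows, plugboard):
    """Sort rows by the fields named in `plugboard`, in that order."""
    return sorted(rows, key=lambda row: tuple(row[field] for field in plugboard))

# Reconfiguring the "plugboard" selects a different sort sequence.
print(tabulate(records, ["district", "occupation"]))
print(tabulate(records, ["occupation", "age"]))
```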
While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines, and 2) as Vicente Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).
Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish Colonialism to U.S. Imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:
Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.
But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.
IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).
The sorting of populations and individuals by forms of social difference including “race,” ability and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped) for the purposes of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between 1/10th and 1/6th of the population of the Philippines, and the Nazi-administered holocaust are but two world historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.
The mechanisms, as well as the social meaning of computation, were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English took as its ur-text Jefferson the Virginian (1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy for its suppression of information regarding Jefferson’s relation to slavery (1997).[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but a language that belonged to a particular mode of sociality and racialized enfranchisement. Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and now as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard even as it also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
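To make concrete the claim that “content-indifferent” statistics are nonetheless derived from a particular content, consider the following minimal sketch (mine, not Shannon’s): the first-order letter frequencies and entropy that parameterize such a model depend entirely on the corpus fed into it. The two sample strings below are arbitrary placeholders, not Shannon’s sources.

```python
# A minimal sketch: first-order letter statistics and entropy computed from two tiny
# "corpora," showing that a content-indifferent model is still parameterized by
# whatever text it happens to be derived from. The sample strings are placeholders.
from collections import Counter
from math import log2

def letter_entropy(text: str) -> float:
    """First-order entropy (bits per letter) of the letter distribution in `text`."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

corpus_a = "We hold these truths to be self evident that all men are created equal"
corpus_b = "They that can give up essential liberty to obtain a little temporary safety"

for name, corpus in (("corpus_a", corpus_a), ("corpus_b", corpus_b)):
    print(name, round(letter_entropy(corpus), 3), "bits per letter")
```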
In a 1987 interview, Shannon was quoted as saying “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” If humans are the robots’ companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being,” such as it is/was, is encrypted in their function. And “we” have never been “one.”
Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues. “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another.” (in Nakamura and Chow-White 2012, 30-31; italics in original)
This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines industry standards for the now-digital reproduction of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.
More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged with them. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.
Thus we may observe that the statistical methods utilized by IBM to find Jews in the shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view, even if it is not one of pure mathematical abstraction in which computational process has its essence without reference to any concrete whatever, can be found in what follows. As an article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups,” written for Bloomberg News, almost giddily reports:
The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]
Not only is war good computer business, it’s good for computerized business. It is ironic that the technology is likened to a tapestry and now used to sell textiles, almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early integrated circuits at Fairchild Semiconductor, the Silicon Valley firm founded by engineers who left the laboratory of founding father and infamous eugenicist William Shockley.[9] The article goes on to confess that the latest consumer spin-offs, which facilitate the real-time imaging of couches in your living room and are capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)
Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming at ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).
Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.
Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” i.e., the Roman term for “slave”), in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies, now extended to the machine, a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?
Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts lower-class workers, mostly people of color, leaving Google’s Mountain View campus during off hours. These workers, the book scanners, shared neither spaces nor perks with Google’s white-collar workers; they had different parking lots and entrances and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that their posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side, where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude. Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.
The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and, with it, the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that are endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting: the exponential growth in memory storage means also an exponential growth in systematic forgetting, the withering away of the analogue. As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.
_____
Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.
[1] A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content-indifferent processing that the term “exchange value,” severed as it is from all qualities, indicates.
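For readers who want the reviewer’s point in executable form, here is a minimal, hypothetical sketch of such content-indifferent processing: a toy machine that rewrites marks on a tape according to a rule table, with no access whatsoever to what the marks might “mean.” The states, symbols and rules are invented for illustration.

```python
# A toy illustration (not a full Turing machine implementation): marks are rewritten
# according to rules that never consult the marks' "meaning." This machine simply
# flips every mark until it reaches the blank symbol "_".
def run(tape: list, rules: dict, state: str = "scan", halt: str = "halt") -> list:
    head = 0
    while state != halt and 0 <= head < len(tape):
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return tape

rules = {
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}
print(run(list("10110_"), rules))  # -> ['0', '1', '0', '0', '1', '_']
```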
[2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).
[3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,
Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)
[4] If one takes the derivative of computational process at a particular point in space-time one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process, the summation of images is the movement of the process.
[5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.
[6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum puts things as follows:
We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, healthcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.
The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).
[8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”
Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge.” Grey Room 18 (Winter): 26-51.
Chun, Wendy Hui Kyong. 2012. “Race and/as Technology, or How to Do Things to Race.” In Nakamura and Chow-White (2012), 38-69.
Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)
As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” so often seems to exclude women. This essay takes on one of the “tests” used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.
According to Wikipedia’s definition of “notability,” a subject is considered notable if it “has received significant coverage in reliable sources that are independent of the subject” (“Wikipedia:Notability” 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable; for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in coverage of female figures in Wikipedia compared to male figures. One study of Encyclopedia Britannica and Wikipedia concluded,
Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)
Five years later, another study found that this bias persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the 20th century the problem of exclusion was wildly exacerbated by “sourcing and notability issues” (“Gender Bias on Wikipedia” 2017).
One potential source for buttressing the case for notable women has been identified by literary scholar Alison Booth. Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular (Booth 2004). Booth also points out that, lest we consign the genre to the realm of mere curiosity, the volumes were “indispensable aids in the formation of nationhood” (Booth 2004, 3).
To reveal the historical contingency of the purportedly neutral criterion of notability, I utilized longitudinal data compiled by Booth, which reveals that notability has never been the stable concept Wikipedia’s standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety: one might be well known while remaining relatively unimportant from a historical perspective. Wikipedia collapses such distinctions, assuming that a body of writing about a historical subject stands as prima facie evidence of notability.
While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women that have higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining pages in Wikipedia to those who mirror the “Great Man Theory” of history (Mattern 2015) or are “notorious” (Lerner 1975).
Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and tracked their frequency over time (2004, 394–396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.
Figure 1. US women by publication date of books that included them (image source: author)
This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because over time, motherhood alone ceases to be seen as a significant contribution to history. Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar pattern may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century. The novelty of female foreign missionaries like Judson faded as more women entered the field. Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, where only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.
Although their notability changed over time, all the women depicted in Figure 1 have Wikipedia pages; this is unsurprising, as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples? Does inclusion in a relatively recent work that declares women notable mean that these women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology for calculating missing biographies: using a reference corpus to identify women who might reasonably be expected to appear in Wikipedia, I calculated the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing biographies quotient for individuals in selected sections of the “classified list of biographies” that appears at the end of the third volume of Notable American Women. The eleven categories with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1).
Classification | % missing
Astronomers | 0
Biologists | 0
Chemists & Physicists | 0
Heroines | 0
Illustrators | 0
Indian Captives | 0
Naturalists | 0
Psychologists | 0
Sculptors | 0
Wives of Presidents | 0
Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia
Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as with the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Additionally, extraordinary women, such as heroines, and those who are quite rare, such as Indian captives, also have a greater chance of success in Wikipedia.[1]
Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975). “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).
Name | Included?
Dinwiddie, Emily Wayland | no
Glenn, Mary Willcox Brown | no
Kingsbury, Susan Myra | no
Lothrop, Alice Louise Higgins | no
Pratt, Anna Beach | no
Regan, Agnes Gertrude | no
Breckinridge, Sophonisba Preston | page
Richmond, Mary Ellen | page
Smith, Zilpha Drew | stub
Table 2. Social Workers from Notable American Women by inclusion in Wikipedia
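To make the arithmetic behind these percentages explicit, a minimal sketch in Python follows. The names and statuses are copied from Table 2; the variable names are illustrative, and counting anything other than “no” (a full page or a stub) as present is an assumption of the sketch, not part of the study’s tooling.

# Minimal sketch of the missing biographies quotient described above.
social_workers = {
    "Dinwiddie, Emily Wayland": "no",
    "Glenn, Mary Willcox Brown": "no",
    "Kingsbury, Susan Myra": "no",
    "Lothrop, Alice Louise Higgins": "no",
    "Pratt, Anna Beach": "no",
    "Regan, Agnes Gertrude": "no",
    "Breckinridge, Sophonisba Preston": "page",
    "Richmond, Mary Ellen": "page",
    "Smith, Zilpha Drew": "stub",
}

# Count the individuals with no Wikipedia presence and express them
# as a share of the whole classification.
missing = sum(1 for status in social_workers.values() if status == "no")
quotient = 100 * missing / len(social_workers)
print(f"{quotient:.0f}% missing")  # prints "67% missing", the figure reported for social workers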
Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” her page is not linked to the category of American social workers (“Category:American Social Workers” 2015). If a female historical figure isn’t as exceptional as Breckinridge, she needs to be a “first,” like Mary Ellen Richmond, who makes it into Wikipedia as the “social work pioneer” (“Mary Richmond” 2017).
This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).
Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder?
Delano, Jane Arminda | leading pioneer | World War I | founder of the American Red Cross Nursing Service
Fedde, Sister Elizabeth* | | | established the Norwegian Relief Society
Maxwell, Anna Caroline | pioneering activities | Spanish-American War |
Nutting, Mary Adelaide | world’s first professor of nursing | World War I | founded the American Society of Superintendents of Training Schools for Nurses
 | | | co-founded the National Association of Colored Graduate Nurses
* Fedde appears in Wikipedia primarily as a Norwegian Lutheran deaconess. The word “nurse” does not appear on her page.
Table 3. Nurses from Notable American Women with Wikipedia entries, by markers of notability
As the entries for nurses reveal, in addition to being first, several other factors work in a female subject’s favor in achieving success in Wikipedia. Nurses who founded an institution or organization, or who participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.
If distinguishing oneself as part of a male-dominated event, by being “first” or founding something, facilitates higher levels of inclusion in Wikipedia for women in female-dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors can offer only a partial explanation (Table 4).
Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder? | Description from Wikipedia
Bagley, Sarah G. | “probably the first” | No | formed the Lowell Female Labor Reform Association | headed up female department of newspaper until fired because “a female department … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
Barry, Leonora Marie Kearney | “only woman,” “first woman” | Knights of Labor | | “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
Bellanca, Dorothy Jacobs | “first full-time female organizer” | No | organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended founding convention of Amalgamated Clothing Workers of America | “men resented” her
Haley, Margaret Angela | “pioneer leader” | No | No | dubbed the “lady labor slugger”
Jones, Mary Harris | No | Knights of Labor; IWW | | “most dangerous woman in America”
Nestor, Agnes | No | Women’s Trade Union League | founded International Glove Workers Union |
O’Reilly, Leonora | No | Women’s Trade Union League | founded the Wage Earners Suffrage League | “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
O’Sullivan, Mary Kenney | the first woman AFL employed | Women’s Trade Union League | founder of the Women’s Trade Union League |
Stevens, Alzina Parsons | first probation officer | Knights of Labor | |
Table 4. Labor leaders from Notable American Women with Wikipedia entries, by markers of notability
In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders that predict success in Wikipedia. One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable. Less optimistically, it also appears that what Lerner labeled “notorious” behavior predicts success: six of the nine women’s entries note such behavior, ranging from speaking out publicly to advocating resistance.
The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion. On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.
The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender, but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).
Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average editor of Wikipedia is white, from Western Europe or the United States, between 30 and 40 years old, and overwhelmingly male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women. “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).
The hostility within Wikipedia against female contributors remains a significant barrier to altering its content since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015). Despite years of concerted efforts to make Wikipedia more hospitable toward women, to organize editathons, and place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). Authors of a recent study of “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure” which includes “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).
Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “A lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.
This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)
While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.
_____
Michelle Moravec is an associate professor of history at Rosemont College.
[1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.
_____
Works Cited
Beard, Mary Ritter. 1977. “A Study of the Encyclopaedia Britannica in Relation to Its Treatment of Women.” In Ann J. Lane, ed., Making Women’s History: The Essential Mary Ritter Beard. Feminist Press at CUNY. 215–24.
Booth, Alison. 2004. How to Make It as a Woman: Collective Biographical History from Victoria to the Present. Chicago, Ill.: University of Chicago Press.
Boyer, Paul, and Janet Wilson James, eds. 1971. Notable American Women: A Biographical Dictionary. 3 vols. Cambridge, MA: Harvard University Press.
Gallart, Peter, and Maja van der Velden. 2015. “The Sum of All Human Knowledge? Wikipedia and Indigenous Knowledge.” In Nicola Bidwell and Heike Winschiers-Theophilus, eds., At the Intersection of Indigenous and Traditional Knowledge and Technology Design. Santa Rosa, CA: Informing Science Press. 117–34
Hale, Sarah Josepha Buell. 1853. Woman’s Record: Or, Sketches of All Distinguished Women, from “the Beginning” Till A.D. 1850. Arranged in Four Eras. With Selections from Female Writers of Every Age. Harper & Brothers.
Wajcman, Judy, and Heather Ford. 2017. “‘Anyone Can Edit’, Not Everyone Does: Wikipedia’s Infrastructure and the Gender Gap.” Social Studies of Science 47:4. 511-27.
Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.
Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by Rosie the robot maid in the 1962 TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans, embodied by Marvin the Paranoid Android in Douglas Adams’s Hitchhiker’s Guide to the Galaxy, as well as the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped many of our conceptions of society’s future. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.
Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has made ever larger parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages from startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: the Church of AI.
A Promise Fulfilled?
For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Ask today’s experts and spokespeople, and that number has stayed mostly unchanged.
In 2017, AI is the battleground the current IT giants are fighting over. For years, Google has developed machine learning techniques and integrated them into its conversational assistant, which people carry around installed on their smart devices. It has gotten quite good at answering simple questions or triggering simple tasks: asking “OK Google, how far is it from here to Hamburg” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment, or to tell me that a parcel I was expecting has arrived.
Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone is used to from automated phone systems. Press #1 if you believe.
Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.
Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in recent decades, AI is considered not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” of digital services or cyber-physical systems like thermostats or “smart” lights.
Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network reached 52.3% when pointed at footage from BBC politics shows.
Another recent example from Google’s research department, which shows just how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite a technical feat, given that most translation engines have to be meticulously tweaked to translate between two specific languages. But Google’s researchers finish their report with a very different proposition:
The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”? … This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)
Google’s researchers interpret these capabilities as evidence that the neural network is creating a common super-language, one language to finally express all other languages.
These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI development used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled, not by accident, on basic logic, a lot of it looked like the first sessions of a traditional Logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
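A minimal sketch of this symbolic style might look like the following. It is written in Python standing in for a dedicated logic language such as Prolog; the fact and rule encoding and the tiny inference loop are hypothetical illustrations, not any particular historical system.

# Explicit facts and rules, with a naive forward-chaining inference step on top.
facts = {("human", "aristotle")}   # "Aristotle is a human."
rules = [("human", "mortal")]      # "All humans die": human(X) implies mortal(X).

def infer(facts, rules):
    """Keep applying rules until no new statements can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "aristotle") in infer(facts, rules))  # True: "Aristotle will die."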
But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early, as soon as natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles but have failed to achieve wide adoption in real-world use cases.
And then came neural networks.
Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars, or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of said neurons can be “trained” to react in a certain way.
Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained in the network; it needs to be trained.
There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with and without cats would be necessary. When processing these pictures, the network gets positive feedback if the right neuron (the one signaling the detection of a cat) fires, and it strengthens the connections that lead to this result. Where it has a 50/50 chance of being right on the first try, that chance quickly improves to the point that it reaches very good results, provided the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
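To make the training loop concrete, here is a minimal sketch in Python with NumPy. It substitutes a toy task for the cat detector (four “pixels” per image, with a made-up brightness rule standing in for “cat”/“not a cat”), and the network size, learning rate, and step count are arbitrary illustrative choices rather than anything a production system would use.

# A tiny two-layer neural network trained by gradient descent on a toy task.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data with known properties: 200 random "images" of 4 pixels each.
# Label 1 ("cat") if the image is mostly bright -- a made-up stand-in rule.
X = rng.random((200, 4))
y = (X.mean(axis=1) > 0.5).astype(float).reshape(-1, 1)

# One internal layer of 8 neurons feeding a single output neuron.
W1 = rng.normal(0.0, 1.0, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for step in range(5000):
    # Forward pass: input layer -> internal layer -> output neuron.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Feedback: strengthen or weaken the weighted connections that led to
    # a right or wrong answer (gradient of the squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0, keepdims=True)

# Evaluate against pictures the network has never seen.
X_test = rng.random((100, 4))
y_test = (X_test.mean(axis=1) > 0.5).reshape(-1, 1)
pred = sigmoid(sigmoid(X_test @ W1 + b1) @ W2 + b2) > 0.5
print("accuracy on unseen data:", (pred == y_test).mean())

The trained weights are just arrays of numbers; nothing in them explains why a given connection ended up strong or weak, which is precisely the interpretability problem described next.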
Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams), but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why. Neural networks are just sets of neurons and their weighted connections. But what does a weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.
Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, a lack of rain or snow, and generous road sizes); automated translations can almost instantly give people at least an idea of what the rest of the world is talking about; and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?
Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keep churning out like a machine, the trouble with the current mainstream of AI has become quite obvious in recent months.
In June 2015, Google’s Photos service was involved in a scandal: its AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” Even staying within the image detection domain, there have been numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.
While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?
John Searle made this argument decades ago in his “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response of Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating an understanding of Chinese? Searle continues the experiment by assuming that the rules used by the machine get translated into readable English for a person to follow. A person locked in a room with these rules, pencil, and paper could respond to every Chinese text given to them as convincingly as the machine could. But few would propose that this person now “understands” Chinese in the sense that a human being who knows Chinese does.
Current trends in the reception of AI seem to disagree: if a machine can do something that used to be possible only for human cognition, it surely must be intelligent. This assumption of intelligence serves as the foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans), and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?
But how can a neural network save us if it can’t even distinguish monkeys from humans?
Thy Kingdom Come 2.0
The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.
Technologists, who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI, creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first, very visible step of “upgrading” an automated process into something more than meets the eye. The multi-language translation network is not treated as an interesting statistical phenomenon that calls for reflection by experts in the analyzed languages and the cultures using them, with regard to their structural and social similarities and the ways they influence(d) one another. Rather, it is a miraculous device taking steps towards an ideal language that would have made Ludwig Wittgenstein blush.[4]
But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.
But while predicting crimes still makes most people uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. Yet “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should you give the heart transplant to?
Computers cannot lie; they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?
The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends; in traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how to address implicit biases or discriminations when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?
One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.
The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creating or amplifying existing social discrimination or violence, the dogma of AI proponents tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.
Religion is a complex topic, without one simple definition that can be applied to decide whether something is, in fact, a religion. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to completely and selectively define religion. But there are patterns that many popular religions share.
Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths: truths about the meaning of life, of suffering, of Good itself. Because religions are social structures, there is often some form of hierarchy or a system for generating and determining status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.
While this is nowhere close to a comprehensive list of the attributes of religions, these key aspects can help us analyze the religiousness of the AI narrative.
Singulatarianism
Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.
Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and, according to most singulatarians, shall) take control of most aspects of society. While the possibility of the super-AI taking over by force is always lingering in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.
Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.
According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.
While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.
It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, their makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally, Kurzweil’s own choice of words in this book is deeply religious, starting with its title.
With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping every form of scientific progress along, so as to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). Ethical debates are, in Kurzweil’s perception, fundamentally pointless, with the universe and technology-as-god necessarily moving past them, regardless of what the result of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.
Because the intentions and reasoning of the super-AI are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.
Not every proponent of AI goes as far as the singulatarians. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or to decide fully automatically where police should focus their attention (Perry et al. 2013). The second example (usually referred to as “predictive policing”) probably illustrates this problem best: all the training data used to build the models that are supposed to help police be more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.
While there is value to automating certain dangerous or error-prone processes, like for example driving cars in order to protect human life or protect the environment, extending that strategy to society as a whole is a deeply problematic approach.
The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs doesn’t only leave behind the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically, without any way for people to understand the reasoning, to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea that makes the struggle of marginalized groups, and any political program that’s not focused on optimizing[5] the status quo, unthinkable.
Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.
Daemons in the Digital Ether
Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at today, seems like a precognition of sorts.
Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?
Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, and the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.
The world today is deeply cyber-physical. The separation of the digital and the “real” worlds that sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011) can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan and, in certain cases, even whether we may cross a border or not. These processes of power traditionally “ran on” social systems: on government organs, organizations, or maybe just individuals. They are now moving into software agents, removing the risky, biased human factor, as well as checks and balances.
The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, that group’s ignorance of the matter, and the way the group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.
Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education, but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are also creating and structuring the narratives that are up for debate. Science, as a tool to separate truth from falsehood, is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.
With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.
When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by this hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.
On the other hand, the public does have mental guards against “opinion” and “belief” in place that get taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard is circumvented when supposedly objective technologists propose their personal tech-religion as fact.
Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.
Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. Calling these systems “intelligent” implies a certain level of agency, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.
But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations developing and deploying them, as Cathy O’Neil, among other scientists, illustrates in her book Weapons of Math Destruction:
Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)
For many years, Facebook has refused all responsibility for the content on its platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white people (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”
Amen
The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants. Kevin Kelly, co-founder of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.
But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them. From deeply rooted cultural beliefs in the general positivity of technological progress, and from trust in the truth-creating powers of the artifacts the STEM sector produces.
The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can serve a society to create fairer, more transparent processes that are, in fact, not less but more accountable.
But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.
Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect, and constantly evaluate the data used to train these systems. To integrate perspectives of marginalized people, of people potentially affected negatively even in the first steps of the process of training these systems. And to stop offloading responsibility for the actions of automated systems to the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.
Amen.
_____
tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.
[1] Moore’s Law describes the observation, made popular by Intel co-founder Gordon Moore, that the number of transistors per square inch doubles roughly every two years (or every eighteen months, depending on which version of the law is cited).
[3] Prolog is a purely logical programming language that expresses problems as resolutions of logical expressions.
[4] In the Philosophical Investigations (1953), Ludwig Wittgenstein argued against the idea that language corresponds to reality in some simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by the individual use of language, rejecting the idea of an ideal, objective language.
[5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.
[7] The creation of intelligence, of life itself, is a feat traditionally reserved for the gods of old. This is another link to religious world views, as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of current popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Perry, Walter L., Brian McInnis, Carter C. Price, Susan C. Smith, and John S. Hollywood. 2013. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation.
Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3:3. 417–424.
Sherman, D. M. 1987. “A Prolog Model of the Income Tax Act of Canada.” ICAIL ‘87 Proceedings of the 1st International Conference on Artificial Intelligence and Law. New York, NY, USA: ACM. 127-136.
The intersection of digital studies and Indigenous studies encompasses both the history of Indigenous representation on various screens, and the broader rhetorics of Indigeneity, Indigenous practices, and Indigenous activism in relation to digital technologies in general. Yet the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code. This essay on digital Indigenous studies reflects on the social, historical, and cultural mediations involved in Indigenous production and uses of digital media by exploring moments in the integration of the Cherokee syllabary onto digital platforms. We focus on negotiations between the Cherokee Nation’s goal to extend their language and writing system, on the one hand, and the systems of standardization upon which digital technologies depend, such as Unicode, on the other. The Cherokee syllabary is currently one of the most widely available North American Indigenous language writing systems on digital devices. As the language has become increasingly endangered, the Cherokee Nation’s revitalization efforts have expanded to include the embedding of the Cherokee syllabary in the Windows Operating System, Google search engine, Gmail, Wikipedia, Android, iPhone and Facebook.
Figure 1. Wikipedia in Cherokee
With the successful integration of the syllabary onto multiple platforms, the digital practices of Cherokees suggest the advantages and limitations of digital technology for Indigenous cultural and political survivance (Vizenor 2000).
Our collaboration has resulted in a multi-voiced analysis across several essay sections. Hearne describes the ways that engaging with specific problems and solutions around “glitches” at the intersection of Indigenous and technological protocols opens up issues in the larger digital turn in Indigenous studies. Joseph Erb (Cherokee) narrates critical moments in the adoption of the Cherokee syllabary onto digital devices, drawn from his experience leading this effort at the Cherokee Nation language technology department. Connecting our conceptual work with community history, we include excerpts from an interview with Cherokee linguist Durbin Feeling—author of the Cherokee-English Dictionary and Erb’s close collaborator—about the history, challenges, and possibilities of Cherokee language technology use and experience. In the final section, Mark Palmer (Kiowa) presents an “indigital” framework to describe a range of possibilities in the amalgamations of Indigenous and technological knowledge systems (2009, 2012). Fragmentary, contradictory, and full of uncertainties, indigital constructs are hybrid and fundamentally reciprocal in orientation, both ubiquitous and at the same time very distant from the reality of Indigenous groups encountering the digital divide.
Native to the Device
Indigenous people have always been engaged with technological change. Indigenous metaphors for digital and networked space—such as the web, the rhizome, and the river—describe longstanding practices of mnemonic retrieval and communicative innovation using sign systems and nonlinear design (Hearne 2017). Jason Lewis describes the “networked territory” and “shared space” of digital media as something that has “always existed for Aboriginal people as the repository of our collected and shared memory. That hardware technology has made it accessible through a tactile regime in no way diminishes its power as a spiritual, cosmological, and mythical ‘realm’” (175). Cherokee scholar (and former programmer) Brian Hudson includes Sequoyah in a genealogy of Indigenous futurism as a representative of “Cherokee cyberpunk.” While retaining these scholars’ understanding of the technological sophistication and adaptability of Indigenous peoples historically and in the present, taking up a heuristic that recognizes the problems and disjunction between Indigenous knowledge and digital development also enables us to understand the challenges faced by communities encountering unequal access to computational infrastructures such as broadband, hardware, and software design. Tracing encounters between the medium specificity of digital devices and the specificity of Indigenous epistemologies returns us to the incommensurate purposes of the digital as both a tool for Indigenous revitalization and as a sociopolitical framework that makes users do things according to a generic pattern.
The case of the localization of Cherokee on digital devices offers insights into the paradox around the idea of the “digital turn” explored in this b2o: An Online Journal special issue—that on the one hand, the digital turn “suggests that the objects of our world are becoming better versions of themselves. On the other hand, it suggests that these objects are being transformed so completely that they are no longer the things they were to begin with.” While the former assertion is reflected in the techno-positive orientation of much news coverage of the Cherokee adoption on the iPhone (Evans 2011) as well as other Indigenous initiatives such as video game production (Lewis 2014), the latter description of transformation beyond recognizable identity resembles the goals of various historical programs of assimilation, one of the primary “logics of elimination” that Patrick Wolfe identifies in his seminal essay on settler colonialism.
The material, representational, and participatory elements of digital studies have particular resonance in Indigenous studies around issues of land, language, political sovereignty, and cultural practice. In some cases the digital realm hosts or amplifies the imperial imaginaries pre-existing in the mediascape, as Jodi Byrd demonstrates in her analyses of colonial narratives—narratives of frontier violence in particular—normalized and embedded in the forms and forums of video games (2015). Indigeneity is also central to the materialities of global digitality in the production and disposal of the machines themselves. Internationally, Indigenous lands are mined for minerals to make hardware and targeted as sites for dumping used electronics. Domestically in the United States, Indigenous communities have provided the labor to produce delicate circuitry (Nakamura 2014), even as rural, remote Indigenous communities and reservations have been sites of scarcity for digital infrastructure access (Ginsburg 2008). Indigenous communities such as those in the Cherokee Nation are rightly on guard against further colonial incursions, including those that come with digital environments. Communities have concerns about language localization projects: how are we going to use this for our own benefit? If it’s not for our benefit, then why not compute in the colonial language? Are they going to steal our medicine? Is this a further erosion of what we have left?
Lisa Nakamura (2013) has taken up the concept of the glitch as a way of understanding online racism, first as it is understood by some critics as a form of communicative failure or “glitch racism,” and second as the opposite, “not as a glitch but as part of the signal,” an “effect of internet on a technical level” that comprises “a discursive act in itself, not an obstruction to that act.” In this article we offer another way of understanding the glitch as a window onto the obstacles, refusals, and accommodations that take place at an infrastructural level in Indigenous negotiations of the digital. Olga Goriunova and Alexei Shulgin define “glitch” as “an unpredictable change in the system’s behavior, when something obviously goes wrong” (2008, 110).
A glitch is a singular dysfunctional event that allows insight beyond the customary, omnipresent, and alien computer aesthetics. A glitch is a mess that is a moment, a possibility to glance at software’s inner structure, whether it is a mechanism of data compression or HTML code. Although a glitch does not reveal the true functionality of the computer, it shows the ghostly conventionality of the forms by which digital spaces are organized. (114)
Attending to the challenges that arise in Indigenous-settler negotiations of structural obstacles—the work-arounds, problem-solving, false starts, failures of adoption—reveals both the adaptations summoned forth by the standardization built into digital platforms and the ways that Indigenous digital activists have intervened in digital homogeneity. By making visible the glitches—ruptures and mediations of rupture—in the granular work of localizing Cherokee, we arrive again and again at the cultural and political crossroads where Indigenous boundaries become visible within infrastructures of settler protocol (Ginsburg 1991). What has to be done, what has to be addressed, before Cherokee speakers can use digital devices in their own language and their own writing system, and what do those obstacles reveal about the larger orientation of digital environments? In particular, new digital platforms channel adaptations towards the bureaucratization of language, dictating the direction of language change through conventions like abbreviations, sorting requirements, parental controls and autocorrect features.
Within the framework of computational standardization, Indigenous distinctiveness—Indigenous sovereignty itself—becomes a glitch. We can see instantiations of such glitches arising from moments of politicized refusal, as defined by Mohawk scholar Audra Simpson’s insight that “a good is not a good for everyone” (1). Yet we can also see moments when Indigenous refusals “to stop being themselves” (2) lead to strategies of negotiation and adoption, and even, paradoxically, to a politics of accommodation (itself a form of agency) in the uptake of digital technologies. Michelle Raheja takes up the intellectual and aesthetic iterations of sovereignty to theorize Indigenous media production in terms of “visual sovereignty,” which she defines as “the space between resistance and compliance” within which Indigenous media-makers “revisit, contribute to, borrow from, critique, and reconfigure” film conventions, while still “operating within and stretching the boundaries of those same conventions” (1161). We suggest that like Indigenous self-representation on screen, Indigenous computational production occupies a “space between resistance and compliance,” a space which is both sovereigntist and, in its lived reality at the intersection of software standardization and Indigenous language precarity, glitchy.
Our methodology, in the case study of Cherokee language technology development that follows, might be called “glitch retrieval.” We focus on pulse points, moments, stories and small landmarks of adaptation, accommodation, and refusal in the adoption of Sequoyah’s Cherokee syllabary to mobile digital devices. In the face of the wave of publicity around digital apps (“there’s an app for that!”), the story of the Cherokee adoption is not one of appendage in the form of downloadable apps but rather the localization of the language as “native to the device.” Far from being a straightforward development, the process moved in fits and starts, beset with setbacks and surprises, delineating unique minority and endangered Indigenous language practices within majoritarian protocols. To return to Goriunova and Shulgin’s definition, we explore each glitch as an instance of “a mess” that is also “a moment, a possibility,” one that “allows insight” (2008). Each of the brief moments narrated below retrieves an intersection of problem and solution that reveals Indigenous presence as well as “the ghostly conventionality of the forms by which digital spaces are organized” (114). Retrieving the origin stories of Cherokee language technology—the stories of the glitches—gives us new ways to see both the limits of digital technology as it has been imagined and built within structures of settler colonialism, and the action and shape of Indigenous persistence through digital practices.
Cherokee Language Technology and Mobile Devices
Each generation is crucial to the survival of Indigenous languages. Adaptation, and especially adaptation to new technologies, is an important factor in Indigenous language persistence (Hermes et al. 2016). The Cherokee, one of the largest of the Southeast tribes, were early adopters of language technologies, beginning with the syllabary writing system developed by Sequoyah between 1809 and 1820 and presented to the Cherokee Council in 1821. The circumstances of the development of the Cherokee syllabary are nearly unique in that 1) the writing system originated with the work of one man, in the space of a single decade; and 2) it was initiated and ultimately widely adopted from within the Indigenous community itself rather than being developed and introduced by non-Native missionaries, linguists, or other outsiders.
Unlike alphabetic writing based on individual phonemes, a syllabary consists of written symbols indicating whole syllables, which can be more easily developed and learned than alphabetic systems due to the stability of each syllable sound. The Cherokee syllabary uses written characters that each represent a whole syllable, typically a consonant-vowel combination, such as “Ꮉ” for the syllable “ma” and “Ꮀ” for “ho.” Sequoyah’s original writing was done with quill and pen, an inking process that produced cursive characters, but this handwritten orthography gave way to a block print character set for the Cherokee printing press (Cushman 2011). The Cherokee Phoenix was the first Native American newspaper in the Americas, published in Cherokee and English beginning in 1828. Since then, Cherokee people have adapted their language and writing system early and often to new technologies, from typewriters to dot matrix printers. This historical adaptation includes a millennial transformation from technologies that required training to access machines like specially-designed typewriters with Cherokee characters, to the embedding of the syllabary as a standard feature on all platforms for commercially available computers and mobile devices. Very few Indigenous languages have this level of computational integration—in part because very few Indigenous languages have their own writing systems—and the historical moments we present here in the technologization of the Cherokee language illustrate both problems and possibilities of language diversity in standardization-dependent platforms. In the following section, we offer a community-based history of Cherokee language technology in stories of the transmission of knowledge between two generations—Cherokee linguist Durbin Feeling, who began teaching and adapting the language in the 1960s, and Joseph Erb, who worked on digital language projects starting in the early 2000s—focusing on shifts in the uptake of language technology.
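A minimal sketch (Python standard library only) makes the one-character-per-syllable principle concrete: each of the two characters mentioned above is a single Unicode code point whose official name records a whole syllable.

```python
import unicodedata

# Each syllabary character is one code point; its Unicode name records the
# whole syllable it stands for, not an individual consonant or vowel.
for char, syllable in [("Ꮉ", "ma"), ("Ꮀ", "ho")]:
    print(f"{char}  U+{ord(char):04X}  {unicodedata.name(char)}  (syllable '{syllable}')")

# Expected output:
# Ꮉ  U+13B9  CHEROKEE LETTER MA  (syllable 'ma')
# Ꮀ  U+13B0  CHEROKEE LETTER HO  (syllable 'ho')
```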
In the early and mid-twentieth century, churches in the Cherokee Nation were among the sites for teaching and learning Cherokee literacy. Durbin Feeling grew up speaking Cherokee at home, and learned to read the language as a boy by following along as his father read from the Cherokee New Testament. He became fluent in writing the language while serving in the US military in Vietnam, when he would read the Book of Psalms in Cherokee. His curiosity about the language grew as he continued to notice the differences between the written Cherokee usage of the 1800s—codified in texts like the New Testament—and the Cherokee spoken by his community in the 1960s. Beginning with the bilingual program at Northeastern State University (translating syllabic writing into phonetic writing), Feeling worked on Cherokee language lessons and a Cherokee dictionary, for which he translated words from a Webster’s dictionary, writing them on handwritten index cards and recording them on tape. Feeling recalls that in the early 1970s,
Back then they had reel to reel recorders and so I asked for one of those and talked to anybody and everybody and mixed groups, men and women, men with men, women with women. Wherever there were Cherokees, I would just walk up and say do you mind if I just kind of record while you were talking, and they didn’t have a problem with that. I filled up those reel to reel tapes, five of them….I would run it back and forth every word, and run it forward and back again as many times as I had to, and then I would hand write it on a bigger card.
So I filled, I think, maybe about five of those in a shoe box and so all I did was take the word, recorded it, take the next word, recorded it, and then through the whole thing…
There was times the churches used to gather and cook some hog meat, you know. It would attract the people and they would just stand around and joke and talk Cherokee. Women would meet and sew quilts and they’d have some conversations going, some real funny ones. Just like that, you know? Whoever I could talk with. So when I got done with that I went back through and noticed the different kinds of sounds…the sing song kind of words we had when we pronounced something (Erb and Feeling 2016).
The project began with handwriting in syllabary, but the dictionary used phonetics with tonal markers, so Feeling went through each of five boxes of index cards again, labeling them with numbers to indicate the height of sounds and pitches.
Feeling and his team experimented with various machines, including manual typewriters with syllabary keys (manufactured by the well-known Hermes typewriter company), new fonts using a dot matrix printer, and electric typewriters with the Cherokee syllabary on the type ball—the typist had to memorize the locations of all 85 characters. Early attempts to build computer programs allowing users to type in Cherokee resulted in documents that were confined to one computer and could not easily be shared except by printing them.
Figure 2. Typewriter keyboard in Cherokee (image source: authors)
Beginning around 1990, a number of linguists and programmers with interests in Indigenous languages began working with the Cherokee, including Al Webster, who used Mac computers to create a program that, as Feeling described it, “introduced what you could do with fonts with a fontographer—he’s the one who made those fonts that were just like the old print, you know way back in the eighteen hundreds.” Then in the mid-1990s Michael Everson began working with Feeling and others to integrate Cherokee glyphs into Unicode, the primary system for software internationalization. Arising from discussions between engineers at Apple and Xerox, Unicode began in late 1987 as a project to standardize languages for computation. Although the original goal of Unicode was to encode all world writing systems, major languages came first. Michael Everson’s company Evertype has been critical to broader language inclusion, encoding minority and Indigenous languages such as Cherokee, which was added to the Unicode Standard in 1999 with the release of version 3.0.
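As a rough illustration (a sketch using only the Python standard library), the block added in Unicode 3.0 can be enumerated directly; its code points correspond one-to-one with the syllabary’s 85 characters.

```python
import unicodedata

# The Cherokee block introduced in Unicode 3.0 runs from U+13A0 to U+13F4:
# one code point for each of the syllabary's 85 characters.
cherokee = [chr(cp) for cp in range(0x13A0, 0x13F4 + 1)]

print(len(cherokee))                                 # 85
print(cherokee[0], unicodedata.name(cherokee[0]))    # Ꭰ CHEROKEE LETTER A
print(cherokee[-1], unicodedata.name(cherokee[-1]))  # CHEROKEE LETTER YV
```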
Having begun language work with handwritten index cards in the 1960s, and later with typewriters available to only one or two people with specialized skills, Feeling saw Cherokee adopted into Unicode in 1999 and integrated into Apple computer operating systems in 2003. When Apple and the Cherokee Nation publicized the new localization of Cherokee on the iPhone with iOS 4.1 in December 2010, the story was picked up internationally, as well as locally among Cherokee communities. By 2013, users could text, email, and search Google in the syllabary on smartphones and laptops, devices that came with the language already embedded as a standardized feature and that were available at chain stores like Walmart. This development involved different efforts at multiple locations, sometimes simultaneously, and over time. While Apple added Unicode-compliant Cherokee glyphs to the Macintosh in 2003, the Cherokee Nation, as a government entity, used PC computers rather than Macs. PCs had yet to implement Unicode-compliant Cherokee fonts, so there was little access to the writing system on their computers and no known community adoption. At the time, the Cherokee Nation was already using an adapted English font that displayed Cherokee characters but was not Unicode compliant.
One of the first attempts to introduce a Unicode-compliant Cherokee font and keyboard came with the Indigenous Language Institute conference at Northeastern State University in Oklahoma in 2006, where the Institute made the font available on flash drives and provided training to language technologists at the Cherokee Nation. However, the program was not widely adopted due to anticipated wait times in getting the software installed on Cherokee Nation computers. Further, the majority of users did not understand the difference between the new Unicode-compliant fonts and the non-Unicode fonts they were already using. The non-Unicode Cherokee font and keyboard used the same keystrokes and looked the same on screen as the Unicode-compliant system, but certain keys (especially those for punctuation) produced glyphs that would not transfer between computers, so files could not be sent and re-opened on another computer without extensive corrections. The value of Unicode compliance lies in the interoperability to move files between systems, the crucial first step towards integration with mobile devices, which are more useful in remote communities than desktop computers. Addition to Unicode is the first of five steps—the others being development of CLDR data, an open source font, a keyboard layout design, and a word frequency list—before companies can encode a new language into their platforms for computer operating systems. These five steps act as a space of exchange between Indigenous writing systems and digital platforms, within which differences are negotiated.
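The practical difference can be sketched in a few lines of Python. The Latin letters in the “font hack” example below are invented for illustration (they are not the actual legacy font’s mapping); the point is that a legacy font hack stores ordinary Latin code points and relies on a custom font to draw them as syllabary, whereas Unicode-compliant text carries the Cherokee identity of each character wherever the file goes.

```python
# Hypothetical legacy "font hack": the file stores Latin letters, and only a
# specially installed font draws them as Cherokee glyphs. The mapping here is
# invented for illustration.
font_hack_text = "Wmz"

# Unicode-compliant text: the code points themselves are Cherokee, here the
# word ᏣᎳᎩ (tsa-la-gi, "Cherokee").
unicode_text = "\u13E3\u13B3\u13A9"

# Moved to another computer, the font-hack file is just the string "Wmz";
# nothing in the data marks it as Cherokee. The Unicode text keeps its
# identity on any compliant system, and at worst renders as placeholder
# boxes if no Cherokee font is installed.
print(ascii(font_hack_text))   # 'Wmz'
print(unicode_text)            # ᏣᎳᎩ
```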
CLDR
The Common Locale Data Repository (CLDR) is a set of key terms for localization, including months, days, years, countries, and currencies, as well as their abbreviations. This core information is localized on the iPhone and becomes the base from which calendars and other native and third-party apps draw on the device. Many Indigenous languages, including Cherokee, don’t have bureaucratic language, such as abbreviations for days of the week, and need to create it—the Cherokee Nation’s Translation Department and Language Technology Department worked together to create new Cherokee abbreviations for calendrical terms.
Figure 3. Weather in Cherokee (image source: authors)
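As a rough sketch of what the CLDR supplies downstream, the Babel library exposes this locale data in Python. This assumes Babel is installed and that its bundled CLDR snapshot includes the Cherokee locale (language tag chr), so treat it as illustrative rather than guaranteed.

```python
from babel import Locale  # pip install Babel; bundles a snapshot of CLDR data

# Load the Cherokee locale ("chr"); raises UnknownLocaleError if the installed
# CLDR snapshot does not include it.
chr_locale = Locale.parse("chr")

# Abbreviated day and month names are exactly the kind of "bureaucratic
# language" the Cherokee Nation's translators had to coin for the CLDR.
print(chr_locale.days["format"]["abbreviated"])    # dict keyed 0 (Monday) .. 6
print(chr_locale.months["format"]["abbreviated"])  # dict keyed 1 .. 12
```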
Open Source Font
Small communities don’t have budgets to purchase fonts for their languages, and such fonts aren’t financially viable for commercial companies to develop. The challenge for minority language activists is therefore to find sponsorship for the creation of an open source font that works across systems and is available for anyone to adopt into any computer or device. Working with Feeling, Michael Everson developed an open source font for Cherokee. Plantagenet Cherokee (designed by Ross Mills) was the first font to bring Cherokee into Windows (Vista) and Mac OS X (Panther). If there is no Cherokee font on a Unicode-compliant device—that is, the device does not have the language glyphs embedded—then users will see a string of boxes, the default filler for Unicode code points the system cannot display.
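That “string of boxes” problem can be checked for programmatically. A minimal sketch using the fontTools library follows; the font path is a placeholder, and font fallback on real systems is more involved than this single-file check.

```python
from fontTools.ttLib import TTFont  # pip install fonttools

font = TTFont("SomeCherokeeFont.ttf")   # placeholder path to a font file
cmap = font["cmap"].getBestCmap()       # maps code points to glyph names

# Code points in the Cherokee block (U+13A0..U+13F4) with no entry in the
# character map would render as placeholder boxes ("tofu") on screen.
missing = [cp for cp in range(0x13A0, 0x13F4 + 1) if cp not in cmap]

if missing:
    print(f"{len(missing)} Cherokee code points have no glyph in this font")
else:
    print("Font covers the full Cherokee syllabary block")
```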
Keyboard Layout
New languages need an input method, and companies generally want the most widely used versions made available in open source. Cherokee has both a QWERTY keyboard, which is a phonetically-based Cherokee language keyboard, and a “Cherokee Nation” layout using the syllabary. Digital keyboards for mobile technologies are more complicated to create than physical keyboards and involve intricate collaboration between language specialists and developers. When developing the Cherokee digital keyboard for the iPhone, Apple worked in conjunction with the Translation Department and Language Technology Department at the Cherokee Nation, experimenting with several versions to accommodate the 85 Cherokee characters in the syllabary without creating too many alternate keyboards (the Cherokee Nation’s original involved 13 keyboards, whereas English has 3). Apple ultimately adapted a keyboard that involved two different ways of typing on the same keyboard, combining pop-up keys and an autocomplete system.
Figure 4. Mobile device keyboard in Cherokee (image source: authors)
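A toy sketch (Python) of the phonetic input approach: the user types romanized syllables and the input method emits syllabary characters. Only sound-to-character pairings already cited in this essay are included; a real keyboard must cover all 85 characters and layer pop-up keys and autocomplete on top.

```python
# Romanized syllable to syllabary character, limited to pairings cited above.
SYLLABLE_TO_CHAR = {
    "tsa": "Ꮳ", "la": "Ꮃ", "gi": "Ꭹ",   # ᏣᎳᎩ (tsa-la-gi), "Cherokee"
    "ma": "Ꮉ", "ho": "Ꮀ",
}

def type_phonetic(syllables):
    """Turn a sequence of typed romanized syllables into syllabary output."""
    return "".join(SYLLABLE_TO_CHAR[s] for s in syllables)

print(type_phonetic(["tsa", "la", "gi"]))   # ᏣᎳᎩ
```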
Word Frequency List
The word frequency list is a standard requirement for most operating systems to support autocorrect spelling and other tasks on digital devices. Programmers need a word database, in Unicode, large enough to adequately source programs such as autocomplete. In order to generate the many thousands of words needed to seed the database, the Cherokee Nation had to provide Cherokee documents typed in the Unicode version of the language. But as with other languages, there were many older attempts to embed Cherokee in typewriters and computers that predate Unicode, leading to a kind of catch-22: the Cherokee Nation needed documents already produced in Unicode in order to get the language into computer operating systems and adopted for mobile technologies, but it didn’t have many documents in Unicode because the language hadn’t yet been integrated into those Unicode-compliant systems. In the end the CN employed Cherokee speakers to create new documents in Unicode—re-typing the Cherokee Bible and other documents—to create enough words for a database. Their efforts were complicated by the existence of multiple versions of the language and spelling, and previous iterations of language technology and infrastructure.
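A minimal sketch of how such a frequency list might be generated once Unicode-encoded documents exist (the corpus folder name is a placeholder): tokenize on runs of Cherokee code points and count them.

```python
import re
from collections import Counter
from pathlib import Path

# A "word" here is simply a run of characters from the Cherokee block.
CHEROKEE_WORD = re.compile(r"[\u13A0-\u13F4]+")

counts = Counter()
for path in Path("corpus").glob("*.txt"):   # placeholder folder of Unicode documents
    counts.update(CHEROKEE_WORD.findall(path.read_text(encoding="utf-8")))

# The most frequent words seed autocomplete and autocorrect databases.
for word, n in counts.most_common(20):
    print(word, n)
```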
Translation
Many of the English language words and phrases that are important to computational concepts, such as “security,” don’t have obvious equivalents in Cherokee (or as Feeling said, “we don’t have that”). How does one say “error message” in Cherokee? The CN Translation Department invented words—striving for both clarity and agreement—in order to address coding concepts for operating systems, error messages, and other phrases (which are often confusing even in English) as well as more general language such as the abbreviations discussed above. Feeling and Erb worked together with elders, CN staff, and professional Cherokee translators to invent descriptive Cherokee words for new concepts and technologies, such as ᎤᎦᏎᏍᏗ (u-ga-ha-s-di) or “to watch over something” for security; ᎦᎵᏓᏍᏔᏅ ᏓᎦᏃᏣᎳᎬᎯ (ga-li-da-s-ta-nv da-ga-no-tsa-la-gv-hi) or “something is wrong” for error message; ᎠᎾᎦᎵᏍᎩ ᎪᏪᎵ (a-na-ga-li-s-gi go-we-li) or “lightning paper” for email; and ᎠᎦᏙᎥᎯᏍᏗ ᎠᏍᏆᏂᎪᏗᏍᎩ (a-ga-no-v-hi-s-di a-s-qua-ni-go-di-s-gi) or “knowledge keeper” for computers. For English words like “luck” (as in “I’m feeling lucky,” a concept which doesn’t exist in Cherokee), they created new idioms, such as “ᎡᎵᏊ ᎢᎬᏱᏊ ᎠᏆᏁᎵᏔᏅ ᏯᏂᎦᏛᎦ” (e-li-quu i-gv-yi-quu a-qua-ne-li-ta-na ya-ni-ga-dv-ga) or “I think I’ll find it on the first try.”
Sorting
When the Unicode-compliant Plantagenet Cherokee font was first introduced in Microsoft Windows OS in Vista (2006), the company didn’t add Cherokee to the sorting function (the ability to sort files by numeric or alphabetic order) in its system. When Cherokee speakers named files in the language, they arrived at the limits of the language technology. These limits determine parameters in a user’s personal computing, the point at which naming files in Cherokee or keeping a computer calendar in Cherokee become forms of language activism that reveal the underlying dominance of English in the deeper infrastructure of computational systems. When a user sent a file named with Cherokee characters, such as “ᏌᏊ” (sa-quu, or “one”) and “ᏔᎵ” (ta-li, or “two”), receiving computers could not determine where to place the file because the core operating system had no sorting order for the Unicode code points of Cherokee, and the computer would crash. Sorting orders for Cherokee were not added to Microsoft Windows until Windows 8.
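The gap can be sketched with PyICU, a Python binding to the ICU internationalization library. This assumes PyICU is installed, and whether the underlying ICU build actually ships Cherokee-specific collation data depends on its CLDR version; without it, the collator falls back to default rules.

```python
import icu  # pip install PyICU

# File names in Cherokee and English, as in the example above.
names = ["ᏔᎵ", "ᏌᏊ", "report"]   # ta-li ("two"), sa-quu ("one"), an English name

# Naive sorting falls back to raw code-point order, with Latin before Cherokee.
print(sorted(names))

# A locale-aware collator applies whatever sorting rules the locale data defines.
collator = icu.Collator.createInstance(icu.Locale("chr"))
print(sorted(names, key=collator.getSortKey))
```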
Parental Controls
Part of the protocol for operating systems involves standard protections like parental controls—the ability to enable a program to automatically censor inappropriate language. In order to integrate Cherokee into an OS, the company needed lists of offensive language or “curse words” that could be flagged in parental restrictions settings for their operating system. Meeting the needs of these protocols was difficult linguistically and culturally, because Cherokee does not have the same cultural taboos as English around words for sexual acts or genitals; most Cherokee words are “clean words,” with offensive speech communicated through context rather than the words themselves. Also, because the Cherokee language involves tones, inappropriate meanings can arise from alternate tonal emphases (and the tone is not reflected in the syllabary). Elder Cherokee speakers found it culturally difficult to speak aloud those elements of Cherokee speech that are offensive, while non-Cherokee speaking computer company employees who had worked with other Indigenous languages did not always understand that not all Indigenous languages are alike—“curse words” in one language are not inappropriate in others. Finally, almost all of the potentially offensive Cherokee words that certain technology companies sought not only lacked the offensive connotations of their English translations but also carried dual or multiple meanings, so blocking them would also block common words that had no inappropriate meaning.
Mapping and Place Names
One of the difficulties for Cherokees working to create Cherokee language country names and territories was the Cherokee Nation’s own exclusion from the lists. Speakers translated the names of even tiny nations into Cherokee for lists and maps in which the Cherokee Nation itself did not appear. Discussions of terminologies for countries and territories were frustrating because the Cherokee themselves were not included, making the colonial erasure of Indigenous nationhood and territories visible to Cherokee speakers as they did the translations. Erb is currently working with Google Maps to revise their digital maps to show federally recognized tribal nations’ territories.
Passwords and Security
One of the first attempts to introduce Unicode-compliant Cherokee on computers for the Immersion School, ᏣᎳᎩ ᏧᎾᏕᎶᏆᏍᏗ (tsa-la-gi tsu-na-de-lo-qua-s-di), involved problems and glitches that temporarily set back adoption of Unicode systems. The CN Language Technology Department added the Unicode-compliant font and keyboards to an Immersion School curriculum developer’s computer. However, at the time computers could only accept passwords typed in English characters. After the curriculum developer had been typing in Cherokee and left their desk, their computer automatically logged off (auto-logoff is standard security for government computers). Temporarily locked out of their computer, they couldn’t switch their keyboard back to English to type the English password. Other teachers and translators heard about this “lockout,” and most decided against having the new Unicode-compliant fonts on their computers. Glitches like these slowed the rollout of Unicode-compliant fonts and set back the adoption process in the short term.
Community Adoption
When computers began to enter Cherokee communities, Feeling recalls his own hesitation about social media sites like Facebook: “I was afraid to use that.” But when a contested election for Principal Chief of the Cherokee Nation in 2011 made social media a faster source of updates than traditional media, many community members signed up for Facebook accounts so they could keep abreast of the latest news about the election.
Figure 5. Facebook in Cherokee (image source: authors)
Similarly, when Cherokee first became available on the iPhone with iOS 4.1, many Cherokee people were reluctant to use it. Feeling says he was “scared that it wouldn’t work, like people would get mad or something.” But older speakers wanted to communicate with family members in Cherokee, and they provided the pressure for others to begin using mobile devices in the language. Feeling’s older brother, also a fluent speaker, bought an iPhone just to text with his brother in Cherokee, because his Android phone wouldn’t properly display the language.
In 2009, the Cherokee Nation introduced Macintosh computers in a 1:1 computer-to-student ratio for the second and third grades of the Cherokee Immersion School, and gave students air cards to get wireless internet service at home through cell towers (because internet was unavailable in many rural Cherokee homes). Up to this point the students spoke Cherokee at school, but rarely used the language outside of school or spoke it at home. With these tools, students could—and did—get on FaceTime and iChat from home and in other settings to talk with classmates in Cherokee. For some parents, it was the first time they had heard their children speaking Cherokee at home. This success convinced many in the community of the worth of Cherokee language technologies for digital devices.
The ultimate community adoption of Cherokee in digital forms—computers, mobile devices, search engines and social media—came when the technologies were most applicable to community needs. What worked was not clunky modems for desktops but iPhones that could function in communities without internet infrastructure. The story of Cherokee adoption into digital devices illustrates the pull towards English-language structures of standardization for Indigenous and minority language speakers, who are faced with challenges of skill acquisition and adaptation; language development histories that involve versions of orthographies, spellings, neologisms and technologies; and problems of abstraction from community context that accompany codifying practices. Facing the precarity of an eroding language base and the limitations and possibilities of digital devices, the Cherokee and other Indigenous communities have strategically adapted hardware and software for cultural and political survivance. Durbin Feeling describes this adaptation as a Cherokee trait: “It’s the type of people that are curious or are willing to learn. Like we were in the old times, you know? I’m talking about way back, how the Cherokees adapted to the English way….I think it’s those kind of people that have continued in a good way to use and adapt to whatever comes along, be it the printing press, typewriters, computers, things like that. … Nobody can take your language away. You can give it away, yeah, or you can let it die, but nobody can take it away.”
Indigital Frameworks
Our case study reveals important processes in the integration of Cherokee knowledge systems with the information and communication technologies that have transformed notions of culture, society and space (Brey 2003). This kind of creative fusion is nothing new—Indigenous peoples have been encountering and exchanging with other peoples from around the world and adopting new materials, technologies, ideas, standards, and languages to meet their own everyday needs for millennia. The emerging concept indigital describes such encounters and collisions between the digital world and Indigenous knowledge systems, as highlighted in The Digital Arts and Humanities (Travis and von Lünen 2016). Indigital describes the hybrid blending or amalgamation of Indigenous knowledge systems, including language, storytelling, calendar making, and song and dance, with technologies such as computers, Internet interfaces, video, maps, and GIS (Palmer 2009, 2012, 2013, 2016). Indigital constructs are forms of what Bruno Latour calls technoscience (1987), the merging of science, technology, and society—but while Indigenous peoples are often left out of global conversations regarding technoscience, the indigital framework attempts to bring them into such conversations.
Indigital constructs exist because knowledge systems like language are open, dynamic, and ever-changing; are hybrid as two or more systems mix, producing a third; require the sharing of power and space which can lead to reciprocity; and are simultaneously everywhere and nowhere (Palmer 2012). Palmer associates indigital frameworks with Indigenous North Americans and the mapping of Indigenous lands by or for Indigenous peoples using maps and GIS (2009; 2012; 2016). GIS is a digital mapping and database software used for collecting, manipulating, analyzing, and mapping various spatial phenomena. Indigenous language, place-names, and sacred sites often converge with GIS resulting in indigital geographic information networks. The indigital framework, however, can be applied to any encounter and exchange involving Indigenous peoples, technologies, and cultures.
First, indigital constructs emerge locally, often when individuals or groups of individuals adopt and experiment with culture and technology within spaces of exchange, as happens in the moments of challenge and success in the integration of Cherokee writing systems to digital devices outlined in this essay. Within spaces of exchange, cultural systems like language and technology do not stand alone as dichotomous entities. Rather, they merge together creating multiplicity, uncertainty, and hybridization. Skilled humans, typewriters, index cards, file cabinets, language orthographies, Christian Bibles, printers, funding sources, transnational corporations, flash drives, computers, and cell-phones all work to stabilize and mobilize the digitization of the Cherokee language. Second, indigital constructs have the potential to flow globally; Indigenous groups and communities tap into power networks constructed by global transnational corporations, like Apple, Google, or IBM. Apple and Google are experts at creating standardized computer designs while connecting with a multitude of users. During negotiations with Indigenous communities, digital technologies are transformative and can be transformed. Finally, indigital constructs introduce different ways that languages can be represented, understood, and used. Differences associated with indigital constructs include variations in language translations, multiple meanings of offensive language, and contested place-names. Members of Indigenous communities have different experiences and reasons for adopting or rejecting the use of indigital constructs in the form of select digital devices like personal computers and cell-phones.
One hopeful aspect in this process is the fact that Indigenous knowledge systems and digital technologies are combinable. The idea of combinability is based on the convergent nature of digital technologies and the creative intention of the artist-scientist. In fact, electronic technologies enable new forms from such combinations, like Cherokee language keyboards, Kiowa story maps and GIS, or Maori language dictionaries. Digital recordings of community members or elders telling important stories that hold lessons for future generations are becoming more widely available, made using audio or visual devices or a combination of both formats. Digital prints of maps can be easily carried to roundtables for discussion about the environment (Palmer 2016), with audiovisual images edited on digital devices and uploaded or downloaded to other digital devices and eventually connected to websites. The mapping of place-names, creation of Indigenous language keyboards, and integration of stories into GIS require standardization, yet those standards are often defined by technocrats far removed from Indigenous communities, with little input from community members and elders. Whatever the intention of the elders telling the story or the digital artist creating the construction, this is an opportunity for the knowledge system and its accompanying information to be shared.
Ultimately, how do local negotiations on technological projects influence final designs and representations? Indigital constructions (and spaces) are hybrid and require mixing at least two things to create a new third construct or third space (Bhabha 2006). The creation of a new Cherokee bureaucratic language to meet the iPhone’s CLDR requirements for representing calendar elements, negotiated between Cherokee language specialists and computer language specialists, resulted in hybrid space-times: a hybrid calendar shared as a form of Cherokee-constructed technoscience. The same process applied to the development of specialized and now standardized Cherokee fonts and keyboards for the iPhone. A question for future research might be how much Unicode standardization transforms the Cherokee language in terms of meaning and understanding. What elements of Cherokee are altered and how are the new constructs interpreted by community members? How might Cherokee fonts and keyboards contribute to the sustainability of Indigenous culture and put language into practice?
Survival of indigital constructs requires reciprocity between systems. Indigital constructions are not set up as one-way flows of knowledge and information. Rather, indigital constructions are spaces for negotiation, featuring the ideas and thoughts of the participants. Reciprocity in this sense means cross-cultural exchange on equal footing, since a one-sided concentration of power will undermine any rights-based approach to building bridges among participants. One-way flows of knowledge are revealed when Cherokee or other Indigenous informants providing place-names to Apple, Microsoft, or Google realize that their own geographies are not represented. They are erased from the maps. Indigenous geographies are often trivialized as being local, vernacular, and particular to a culture, which goes against the grain of technoscience standardization and universalization. The trick of indigital reciprocity is shared power, networking (Latour 2005), assemblages (Deleuze and Guattari 1988), decentralization, trust, and collective responsibility. If all these relations are in place, rights-based approaches to community problems have a chance of success.
Indigital constructions are everywhere—Cherokee iPhone language applications or Kiowa stories in GIS are just a few examples, and many more occur in film, video, and other digital media types not discussed in this article. Yet, ironically, indigital constructions are also very distant from the reality of many Indigenous people on a global scale. Indigital constructions are primarily composed in the developed world, especially what is referred to as the global north. There is still a deep digital divide among Indigenous peoples and many Indigenous communities do not have access to digital technologies. How culturally appropriate are digital technologies like video, audio recordings, or digital maps? The indigital is distant in terms of addressing social problems within Indigenous communities. Oftentimes, there is a fear of the unknown in communities like the one described by Durbin Feeling in reference to adoption of social media applications like Facebook. Some Indigenous communities consider carefully the implications of adopting social media or language applications created for community interactions. Adoption may be slow, or not meet the expectations of software developers. Many questions arise in this process. Do creativity and social application go hand in hand? Sometimes we struggle to understand how our work can be applied to everyday problems. What is the potential of indigital constructions being used for rights-based initiatives?
Conclusion
English-speakers don’t often pause to consider how their language comes to be typed, displayed, and shared on digital devices. For Indigenous communities, the dominance of majoritarian languages on digital devices has contributed to the erosion of their languages. While the isolation of many Indigenous communities in the past helped to protect their languages, that same isolation has required incredible efforts for minority language speakers to assert their presence in the infrastructures of technological systems. The excitement over the turn to digital media in Indian country is an easy story to tell to a techno-positive public, but in fact this turn involves a series of paradoxes: we take materials out of Indigenous lands to make our devices, and then we use them to talk about it; we assert sovereignty within the codification of standardized practices; we engage new technologies to sustain Indigenous cultural practices even as technological systems demand cultural transformation. Such paradoxes get to the heart of deeper questions about culturally-embedded technologies, as the modes and means of our communication shift to the screen. To what extent do digital media re-make the Indigenous world, or can they function simply as tools? Digital media are functionally inescapable and have come to constitute elements of our self-understanding; how might such media change the way Indigenous participants understand the world, even as they note their own absences from the screen? The insights from the technologization of Cherokee writing engage us with these questions along with closer insights into multiple forms of Indigenous information and communications technology and the emergence of indigital creations, inventing the next generation of language technology.
_____
Joseph Lewis Erb is a computer animator, film producer, educator, language technologist and artist enrolled in the Cherokee Nation. He earned his MFA from the University of Pennsylvania, where he created the first Cherokee animation in the Cherokee language, “The Beginning They Told.” He has used his artistic skills to teach Muscogee Creek and Cherokee students how to animate traditional stories. Most of this work is created in the Cherokee Language, and he has spent many years working on projects that will expand the use of Cherokee language in technology and the arts. Erb is an assistant professor at the University of Missouri, teaching digital storytelling and animation.
Joanna Hearne is associate professor in the English Department at the University of Missouri, where she teaches film studies and digital storytelling. She has published a number of articles on Indigenous film and digital media, animation, early cinema, westerns, and documentary, and she edited the 2017 special issue of Studies in American Indian Literatures on “Digital Indigenous Studies: Gender, Genre and New Media.” Her two books are Native Recognition: Indigenous Cinema and the Western (SUNY Press, 2012) and Smoke Signals: Native Cinema Rising (University of Nebraska Press, 2012).
Mark H. Palmer is associate professor in the Department of Geography at the University of Missouri who has published research on institutional GIS and the mapping of Indigenous territories. Palmer is a member of the Kiowa Tribe of Oklahoma.
[*] The authors would like to thank Durbin Feeling for sharing his expertise and insights with us, and the University of Missouri Peace Studies Program for funding interviews and transcriptions as part of the “Digital Indigenous Studies” project.
_____
Works Cited
Bhabha, Homi K. and J. Rutherford. 2006. “Third Space.” Multitudes 3. 95-107.
Brey, P. 2003. “Theorizing Modernity and Technology.” In Modernity and Technology, edited by T.J. Misa, P. Brey, and A. Feenberg, 33-71. Cambridge: MIT Press.
Byrd, Jodi A. 2015. “’Do They Not Have Rational Souls?’: Consolidation and Sovereignty in Digital New Worlds.” Settler Colonial Studies: 1-15.
Cushman, Ellen. 2011. The Cherokee Syllabary: Writing the People’s Perseverance. Norman: University of Oklahoma Press.
Deleuze, Gilles, and Félix Guattari. 1988. A Thousand Plateaus: Capitalism and Schizophrenia. New York: Bloomsbury Publishing.
Feeling, Durbin and Joseph Erb. 2016. Interview with Durbin Feeling, Tahlequah, Oklahoma. 30 July.
Feeling, Durbin. 1975. Cherokee-English Dictionary. Tahlequah: Cherokee Nation of Oklahoma.
Ginsburg, Faye. 1991. “Indigenous Media: Faustian Contract or Global Village?” Cultural Anthropology 6:1. 92-112.
Ginsburg, Faye. 2008. “Rethinking the Digital Age.” In Global Indigenous Media: Culture, Poetics, and Politics, edited by Pamela Wilson and Michelle Stewart. Durham: Duke University Press. 287-306.
Goriunova, Olga and Alexei Shulgin. 2008. “Glitch.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, MA: MIT Press. 110-18.
Hearne, Joanna. 2017. “Native to the Device: Thoughts on Digital Indigenous Studies.” Studies in American Indian Literatures 29:1. 3-26.
Hermes, Mary, et al. 2016. “New Domains for Indigenous Language Acquisition and Use in the USA and Canada.” In Indigenous Language Revitalization in the Americas, edited by Teresa L. McCarty and Serafin M. Coronel-Molina. London: Routledge. 269-291.
Hudson, Brian. 2016. “If Sequoyah Was a Cyberpunk.” 2nd Annual Symposium on the Future Imaginary, August 5th, University of British Columbia-Okanagan, Kelowna, B.C.
Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press.
Lewis, Jason. 2014. “A Better Dance and Better Prayers: Systems, Structures, and the Future Imaginary in Aboriginal New Media.” In Coded Territories: Tracing Indigenous Pathways in New Media Art, edited by Steven Loft and Kerry Swanson. Calgary: University of Calgary Press. 49-78.
Manovich, Lev. 2002. The Language of New Media. Cambridge, MA: MIT Press.
Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66:4. 919-941.
Palmer, Mark. 2016. “Kiowa Storytelling around a Map.” In Travis and von Lünen (2016). 63-73.
Palmer, Mark. 2013. “(In)digitizing Cáuigú Historical Geographies: Technoscience as a Postcolonial Discourse.” In History and GIS: Epistemologies, Considerations and Reflections, edited by A. von Lünen and C. Travis. Dordrecht, NLD: Springer Publishing. 39-58.
Palmer, Mark. 2012. “Theorizing Indigital Geographic Information Networks.” Cartographica: The International Journal for Geographic Information and Geovisualization 47:2. 80-91.
Palmer, Mark. 2009. “Engaging with Indigital Geographic Information Networks.” Futures: The Journal of Policy, Planning and Futures Studies 41. 33-40.
Palmer, Mark and Robert Rundstrom. 2013. “GIS, Internal Colonialism, and the U.S. Bureau of Indian Affairs.” Annals of the Association of American Geographers 103:5. 1142-1159.
Raheja, Michelle. 2011. Reservation Reelism: Redfacing, Visual Sovereignty, and Representations of Native Americans in Film. Lincoln: University of Nebraska Press.
Simpson, Audra. 2014. Mohawk Interruptus: Political Life Across the Borders of Settler States. Durham: Duke University Press.
Travis, C. and A. von Lünen. 2016. The Digital Arts and Humanities. Basel, Switzerland: Springer.
Vizenor, Gerald. 2000. Fugitive Poses: Native American Indian Scenes of Absence and Presence. Lincoln: University of Nebraska Press.
Wolfe, Patrick. 2006. “Settler Colonialism and the Elimination of the Native.” Journal of Genocide Research 8:4. 387-409.
Christian Jacob, in The Sovereign Map, describes maps as enablers of fantasy: “Maps and globes allow us to live a voyage reduced to the gaze, stripped of the ups and downs and chance occurrences, a voyage without the narrative, without pitfalls, without even the departure” (2005). Consumers and theorists of maps, more than cartographers themselves, are especially set up to enjoy the “voyage reduced to the gaze” that cartographic artifacts (including texts) are able to provide. An outside view, distant from the production of the artifact, activates the epistemological potential of the artifact in a way that producing the same artifact cannot.
This dynamic is found at the conceptual level of interpreting cartography as a discipline as well. J.B. Harley, in his famous essay “Deconstructing the Map,” writes that:
a major roadblock to our understanding is that we still accept uncritically the broad consensus, with relatively few dissenting voices, of what cartographers tell us maps are supposed to be. In particular, we often tend to work from the premise that mappers engage in an unquestionably “scientific” or “objective” form of knowledge creation…It is better for us to begin from the premise that cartography is seldom what cartographers say it is (Harley 1989, 57).
Harley urges an interpretation of maps outside the purview and authority of the map’s creator, just as a literary scholar would insist on the critic’s ability to understand the text beyond the authority of what the authors say about their texts. There can be, in other words, a power in having distance from the act of making. There is clarity that comes from the role of the thinker outside of the process of creation.
The goal of this essay is to push back against the valorization of “tools” and “making” in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of cartographers’ work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.
Cartography in the sixteenth century, even as its tools and representational techniques were becoming more and more sophisticated, could never quite abandon the religious legacies of its past, nor did it want to. Roger Bacon in the thirteenth century had claimed that only with a thorough understanding of geography could one understand the Bible. Pauline Moffitt Watts, in her essay “The European Religious Worldview and Its Influence on Mapping” concludes that many maps, including those by Eskrich and Ortelius, preserved a sense of providential and divine meaning even as they sought to narrate smaller, local areas:
Although the messages these maps present are inescapably bound, their ultimate source—God—transcends and eclipses history. His eternity and omnipresence is signed but not constrained in the figurae, places, people, and events that ornament them. They offer fantastic, sometimes absurd vignettes and pastiches that nonetheless integrate the ephemera into a vision of providential history that maintained its power to make meaning well into the early modern era. (2007, 400)
The way maps make meaning is contained not just in the technical expertise of the way the maps are constructed but in the visual experiences they provide that “make meaning” for the viewer. By over-prioritizing an emphasis on the way maps are made or on the geometric innovations that make their creation possible, the cartographic historian and theorist would miss the full effect of the work.
Yet, the spiritual dimensions of mapmaking were not in opposition to technological expertise, and in many cases they went hand in hand. In his book Radical Arts, the Anglo-Dutch scholar Jan van Dorsten describes the spiritual motivations of sixteenth-century cosmographers disappointed by academic theology’s inability to ease the trauma of the European Reformation: “Theology…as the traditional science of revelation had failed visibly to unite mankind in one indisputably ‘true’ perception of God’s plan and the properties of His creature. The new science of cosmography, its students seem to argue, will eventually achieve precisely that, thanks to its non-disputative method” (1970, 56-7). Some mapmakers of the sixteenth century in England, the Netherlands, and elsewhere—including Ortelius and others—imagined that the science and art of describing the created world, a text rivaling scripture in both revelatory potential and divine authorship, would create unity out of the disputation-prone culture of academic theology. Unlike theology, where thinkers are mapping an invisible world held in biblical scripture and apostolic tradition (as well as a millennium’s worth of commentary and exegesis), the liber naturae, the book of nature, is available to the eyes more directly, seemingly less prone to disputation.
Cartographers were attempting to create an accurate imago mundi—surely that was a more tangible and grounded goal than trying to map divinity. Yet, as Patrick Gautier Dalché notes in his essay “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” the modernizing techniques of cartography after the “rediscovery” of Ptolemy’s work did not exactly follow a straight line of empirical progress:
The modernization of the imago mundi and the work on modes of representation that developed during the early years of the sixteenth century should not be seen as either more or less successful attempts to integrate new information into existing geographic pictures. Nor should they be seen as steps toward a more “correct” representation, that is, toward conforming to our own notion of correct representation. They were exploratory games played with reality that took people in different directions…Ptolemy was not so much the source of a correct cartography as a stimulus to detailed consideration of an essential fact of cartographic representation: a map is a depiction based on a problematic, arbitrary, and malleable convention. (2007, 360).
So even as the maps of this period may appear more “correct” to us, they are still engaged in experimentation to a degree that undermines any sense of the map as simply an empirical graphic representation of the earth. The “problematic, arbitrary, and malleable” conventions, used by the cartographer but observed and understood by the cartographic theorist and historian, reveal the sort of synergetic relationship between maker and observer, practitioner and theorist that allow an artifact to come into greater focus.
Yet, cartography for much of its history turned away from seeing its work as culturally or even politically embedded. David Stoddart, in his history of geography, labels Cook’s voyage to the Pacific in 1769 as the origin point of cartography’s transforming into an empirical science.[1] Stoddart places geography, from that point onward, within the realm of the natural sciences based on, as Derek Gregory observes, “three features of decisive significance for the formation of geography as a distinctly modern, avowedly ‘objective’ science: a concern for realism in description, for systematic classification in collection, and for the comparative method in explanation” (Gregory 1994, 19). What is gone, then, in this march toward empiricism is any sense of culturally embedded codes within the map. The map, like a lab report of scientific findings, is meant to represent what is “actually” there. This term “actually” will come back to haunt us when we turn to the digital humanities.
Yet, in the long history of mapping, before and after this supposed empirical fulcrum, maps remain slippery and malleable objects that are used for a diverse range of purposes and that reflect the cultural imagination of their makers and observers. As maps took on the appearance of the empirical and began to sublimate the devotional and fantastical aspects they had once shown proudly, they were no less imprinted with cultural knowledge and biases. If anything, the veil of empiricism allowed the cultural, political, and imperial goals of mapmaking to be hidden.
In his groundbreaking essay “Inventing America: The Culture of the Map,” William Boelhower argued precisely that maps had not simply graphically represented America, but rather that America was invented by maps. “Accustomed to the success of scientific discourse and imbued with the Cartesian tradition,” he writes, “the sons of Columbus naturally presumed that their version of reality was the version” (1988, 212). While Europeans believed they were simply mapping what they saw according to empirical principles, they didn’t realize they were actually inventing America in their own discursive image. He elaborates: “The Map is America’s precognition; at its center is not geography in se but the eye of the cartographer. The fact requires new respect for the in-forming relation between the history of modern cartography and the history of the Euro-American’s being-in-the-new-world” (213). Empiricism, then, was empire. “Empirical” maps were making the eye of the cartographer into the ideal “objective” viewer, producing a fictional way of seeing that reflected state power. Boelhower refers to the scale map as a kind of “panopticon” because of the “line’s achievement of an absolute and closed system no longer dependent on the local perspectivism of the image. With map in hand, the physical subject is theoretically everywhere and nowhere, truly a global operator” (222). What appears, then, simply to be the gathering, studying, and representation of data is, in fact, a system of discursive domination in which the cartographer asserts their worldview onto a site. As Boelhower puts it: “Never before had a nation-state sprung so rationally from a cartographic fiction, the Euclidean map imposing concrete form on a territory and a people” (223). America was a cartographic invention meant to appear as empirically identical to how the cartographers made it look.
To turn again to J.B. Harley’s 1989 bombshell, maps are always evidence of cultural norms and perspectives, even when they try their best to appear sparse and scientific. Referring to “plain scientific maps,” Harley claims that “such maps contain a dimension of ‘symbolic realism’ which is no less a statement of political authority or control than a coat-of-arms or a portrait of a queen placed at the head of an earlier decorative map.” Even “accuracy and austerity of design are now the new talismans of authority culminating in our own age with computer mapping” (60). To represent the world “is to appropriate it” and to “discipline” and “normalize” it (61). The more we move away from cultural markers for the mythical comfort of “empirical” data, the more we find we are creating dominating fictions. There is no representation of data that does not exist within the hierarchies of cultural codes and expectations.
What this rather eclectic history of cartography reveals is that even when maps and mapmaking attempt to hide or move beyond their cultural and devotional roots, cultural, ethical, and political markers inevitably embed themselves in the map’s role as a broker of power. Maps sort data, but in so doing they create worldviews with real-world consequences. While some practitioners of mapmaking in the early modern period, such as the Familists, who counted several cartographers among their membership, might have thought their cartographic work provided a more universal and less disputation-prone discursive focus than, say, philosophy or theology, they were nonetheless producing power through their maps, appropriating and taming the world around them in ways only fully accessible to the reader, the historian, the viewer. Harley invites us to push back against a definition of cartographic studies that follows what cartographers themselves believe cartography must be. One can now, like the author of this essay, be a theorist and historian of cartographic culture without ever having made a map. Having one’s work exist outside the power-formation networks of cartographic technology provides a unique view into how maps make meaning and power out in the world. The main goal of this essay, as I turn to the digital humanities, is to encourage those interested in the digital turn to make room for those who study, observe, and critique, but do not make.[2]
Though the digital turn in the humanities is often celebrated for its wider scope and its ability to allow scholars to interpret—or at least observe—data trends across many more books than one human could read in the research period of an academic project, I would argue that the main thrust of the digital turn can be understood through its preoccupation with a fantasy of access and a view of its labor as fundamentally different from the labor of traditional academic discourse. A radical hybridity is celebrated. Rather than just read books and argue about their contents, the digital humanist is able to draw from a wide variety of sources and expanded data. Michael Witmore, in a recent essay published in New Literary History, celebrates this age of hybridity: “If we speak of hybridization as the point where constraints cease to be either intellectual or physical, where changes in the earth’s mean temperature follow just as inevitably from the ‘political choices of human beings’ as they do from the ‘laws of nature,’ we get a sense of how rich and productive the modernist divide has been. Hybrids have proliferated. Indeed, they seem inexhaustible” (355). Witmore sees digital humanities as existing within this hybridity: “The Latourian theory of hybrids provides a useful starting point for thinking about a field of inquiry in which interpretive claims are supported by evidence obtained via the exhaustive, enumerative resources of computing” (355). The emphasis on the “exhaustive” and “enumerative” resources of computing would imply, even if this were not Witmore’s intention, that computing opens a depth of evidence not available to the non-hybrid, non-digitally enabled humanist.
Indeed, in certain corners of DH, one often finds a suspicious eye cast on the value of traditional exegetical practices pursued without any digital engagement. In The Digital Humanist: A Critical Inquiry by Teresa Numerico, Domenico Fiormonte, and Francesca Tomasi, “the authors call on humanists to acquire the skills to become digital humanists,” elaborating: “Humanists must complete a paso doble, a double step: to rediscover the roots of their own discipline and to consider the changes necessary for its renewal. The start of this process is the realization that humanists have indeed played a role in the history of informatics” (2015, x). Numerico, Fiormonte, and Tomasi offer a vision of the humanities as in need of “renewal” rather than under attack from external forces. The suggestion is that the humanities need to rediscover their roots while at the same time taking on the “tools necessary for [their] renewal,” tools related to their “role in the history of informatics” and computing. The humanities are thus caught in a double bind: they have forgotten their roots, and they are unable to innovate without the help of the digital.
To offer a political aside: while Numerico, Fiormonte, and Tomasi offer a compelling and necessary history of the humanistic roots of computing, their argument is well in line with right-leaning attacks on the humanities. In their view, the humanities have fallen away from their first purpose, their roots. While the authors of the volume see these roots as connected to the early years of modern computer science, they could just as easily, especially given what early computational humanities looked like, be urging a return to philology and to the world of concordances and indexing that was so important to early and mid-twentieth-century literary studies. They might also gesture instead at the deep history of political and philosophical thought out of which the modern university was born, and which was considered fundamental to the very project of university education until only very recently. Barring a return to these roots, the least the humanities can do to survive is to renew themselves through a connection to the digital and to the site of modern work: the computer terminal.
Of course, what scholarly work is done outside the computer terminal? Journals and, increasingly, whole university press catalogs are being digitized and sold to university libraries on a subscription basis. Scholars read these materials and type their own words into word-processing programs on machines (even if, like the recent Freewrite released by Astrohaus, the machine attempts to look as little like a computer as possible), then, in almost all cases, email their work to editors, who edit it digitally and publish it either through digitally enabled print publishing or directly online. So why aren’t humanists of all sorts already considered connected to the digital?
The answer is complicated and, like so many things in DH, depends on which particular theorist or practitioner you ask. Matthew Kirschenbaum writes about how one knows one is a digital humanist:
You are a digital humanist if you are listened to by those who are already listened to as digital humanists, and they themselves got to be digital humanists by being listened to by others. Jobs, grant funding, fellowships, publishing contracts, speaking invitations—these things do not make one a digital humanist, though they clearly have a material impact on the circumstances of the work one does to get listened to. Put more plainly, if my university hires me as a digital humanist and if I receive a federal grant (say) to do such a thing that is described as digital humanities and if I am then rewarded by my department with promotion for having done it (not least because outside evaluators whom my department is enlisting to listen to as digital humanists have attested to its value to the digital humanities), then, well, yes, I am a digital humanist. Can you be a digital humanist without doing those things? Yes, if you want to be, though you may find yourself being listened to less unless and until you do some thing that is sufficiently noteworthy that reasonable people who themselves do similar things must account for your work, your thing, as a part of the progression of a shared field of interest. (2014, 55)
Kirschenbaum defines the digital humanist as, mostly, someone who does something that earns the recognition of other digital humanists. He argues that this is not particularly different from the traditional humanities, in which publications, grants, jobs, etc. are the standard measures of who is or is not a scholar. Yet one wonders, especially in the age of the complete collapse of the humanities job market, whether such institutional distinctions are either ethical or accurate. What would we call someone with a Ph.D. (or even without) who spends their days reading books and scholarly articles and writing in their own room about the Victorian verse monologue or the early Tudor dramatic interludes? If no one reads a scholar, are they still a scholar? For the creative arts, we seem to have answered this question. We believe that the work of a poet, artist, or philosopher matters much more than their institutional appreciation or memberships during the era of the work’s production. Also, the need to be “listened to” is particularly vexed and reflects some of the political critiques that are often launched at DH. Who is most listened to in our society? White, cisgendered, heterosexual men. In the age of Trump, we are especially attuned to the fact that whom we choose to listen to is not always the most deserving or talented voice, but the one reflecting existing narratives of racial and economic distribution.
Beyond this, the combined requirement of institutional recognition and economic investment (a salary from a university, a prestigious grant paid out) ties the work of the humanist to institutional rewards. One can be a poet, scholar, or thinker in one’s own house, but one cannot be an investment banker or a lawyer or a police officer by self-declaration. The fluid nature of who can be a philosopher, thinker, poet, or scholar has always meant that the work, not the institutional affiliation, of a writer/maker matters. Though DH is a diverse body of practitioners doing all sorts of work, it is often framed, sometimes only implicitly, as a return to “work” over “theory.” Kirschenbaum, for instance, defending DH against accusations that it is against the traditional work of the humanities, writes: “Digital humanists don’t want to extinguish reading and theory and interpretation and cultural criticism. Digital humanists want to do their work… they want professional recognition and stability, whether as contingent labor, ladder faculty, graduate students, or in ‘alt-ac’ settings” (56). They essentially want the same things any other scholar does. Yet, while digital humanists are on the one hand defined by their ability to be listened to and by their professional recognition and stability, they are on the other hand still in search of that recognition and stability, and eager to reshape humanistic work toward a more technological model.
This leads to a question that is not always explored closely enough in discussions of the digital humanities in higher education. Though scholars are rightly building bridges between STEM and the humanities (pushing for STEAM over STEM), there are major institutional differences between how the humanities and the sciences have traditionally functioned. Scientific research largely happens because of institutional investment of some kind, whether from governmental, NGO, or corporate grants. This is why the funding sources of any given study are particularly important to follow. In the humanities, of course, grants also exist, and they are a marker of career prestige. No one could doubt the benefit of time spent in a far-away archive, or at home writing instead of teaching because of a dissertation-completion grant. Grants, in other words, boost careers, but they are not necessary.[3] Very successful humanists have depended on library resources alone to produce influential work. In many cases, access to a library, a computer, and a desk is all one needs, and the digitization of many archives (a phenomenon not free from political and ethical complications) has expanded access to archival materials once available only to students of wealthy institutions with deep special-collections budgets or to those with grants enabling them to travel and lodge themselves far away for their research.
All this is to say that a particular valorization of the sciences is risky business for the humanities. Kirschenbaum recommends that since “digital humanities…is sometimes said to suffer from Physics envy,” the field should embrace this label and turn to “a singularly powerful intellectual precedent for examining in close (yes, microscopic) detail the material conditions of knowledge production in scientific settings or configurations. Let us read citation networks and publication venues. Let us examine the usage patterns around particular tools. Let us treat the recensio of data sets” (60). Longing for the humanities to resemble the sciences is nothing new. Longing for data sets instead of individual texts, longing for “particular tools” rather than a philosophical problem or trend, can sometimes be a helpful corrective to more Platonic searches for the “spirit” of a work or movement. And yet there are risks to this approach, not least because the works themselves, that is, the objects of inquiry, are treated in such general terms that they become essentially invisible. One can miss the tree for the forest and know more about the number of citations of Dante’s Commedia than about the original text, or the spirit in which those citations are made. Surely, there is room for both, except when, because of shrinking hiring practices, there isn’t.
In fact, the economic politics of digital humanities has long been a source of at times fiery debate. Daniel Allington, Sarah Brouillette, and David Golumbia, in “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” argue that the digital humanities have long been defined more by their preference for lab- and project-based sources of knowledge over traditional humanistic inquiry:
What Digital Humanities is not about, despite its explicit claims, is the use of digital or quantitative methodologies to answer research questions in the humanities. It is, instead, about the promotion of project-based learning and lab-based research over reading and writing, the rebranding of insecure campus employment as an empowering “alt-ac” career choice, and the redefinition of technical expertise as a form (indeed, the superior form) of humanistic knowledge. (Allington, Brouillette and Golumbia 2016)
This last point, the valorization of “technical expertise,” is, I would argue, profoundly difficult to perform in a way that doesn’t implicitly devalue the classic toolbox of humanistic inquiry. The motto “More hack, less yack”—a favorite of the THATCamps, collaborative “un-conferences”—encapsulates this idea. Too much unfettered talking could lead to discord, to ambiguity, and to strife. To hack, on the other hand, is understood as something tangible and implicitly more worthwhile than the production of discourse outside of particular projects and digital labs. Yet, as Natalia Cecire has noted, “You show up at a THATCamp and suddenly folks are talking about separating content and form as if that were, like, a real thing you could do. It makes the head spin” (Cecire 2011). Context, with all its ambiguities, once the bedrock of humanistic inquiry, is being sidestepped for massive data analysis that, by the very nature of distant reading, cannot account for context to a degree that would satisfy, say, the many Renaissance scholars who trained me. Cecire’s argument is a valuable one. In her post, she does not argue that we should necessarily follow a strategy of “no hack,” only that “we should probably get over the aversion to ‘yack.’” As she notes, “[yack] doesn’t have to replace ‘hack’; the two are not antithetical.”
As DH continues to define itself, one can detect a sense that digital humanists’ focus on individual pieces or series of data, as well as their work in coding, embeds them in more empirical conversations that do not float to the level of speculation so emblematic of what used to be called high theory. This is, for many DH practitioners, a source of great pride. Kirschenbaum ends his essay with the following observation: “there is one thing that digital humanities ineluctably is: digital humanities is work, somebody’s work, somewhere, some thing, always. We know how to talk about work. So let’s talk about this work, in action, this actually existing work” (61). The insistence on “some thing” and “this actually existing work” implies that there is work not centered on a thing, work that does not actually exist, and that the move toward more concrete objects of inquiry, toward more empirical subjects, is a defining characteristic of digital humanities.
This, among other issues, has led many to respond to the digital humanities as if they were cooperating with and participating in the corporatized ideologies of Silicon Valley “tech culture.” Whitney Trettien, in an insightful blog post, claims, “Humanities scholars who engage with technology in non-trivial ways have done a poor job responding to such criticism” and accuses those who criticize digital humanities of “continuing to reify a diverse set of practices as a homogeneous whole.” Let me be clear: I am not claiming that Kirschenbaum or Trettien or any other scholar writing in a theoretical mode about digital humanities is representative of an entire field. But their writing is part of the discursive community, and when those of us whose work is enabled by digital resources but who do not work to build digital tools see our work described as a “trivial” engagement with the digital, and see it put in contrast, implicitly but still clearly, with “this actually existing work,” it is hard not to feel as if the humanist working on texts with digital tools (but not about the digital tools or about data derived from digital modeling) were being somehow slighted.
For instance, in a short essay, “Why Digital Humanities is ‘Nice,’” Tom Scheinfeldt claims: “One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are ‘collegiality,’ ‘openness,’ and ‘collaboration’” (2012, 1). I have to admit I have not noticed what Scheinfeldt claims people often notice (perhaps I have spent too much time on Twitter watching digital humanities debates unfold in less than “nice” ways), but the claim, even as a discursive and defining fiction around DH, helps us understand one thread of the digital humanities’ project of self-definition: we are kind because what we work on is verifiable fact, not complicated and speculative philosophy or theory. Scheinfeldt says as much as he concludes his essay:
Digital humanities is nice because, as I have described in earlier posts, we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically, or the practical needs of our projects require us simply to pick one and move on. Moreover, as Sean Takats, my colleague at the Roy Rosenzweig Center for History and New Media (CHNM), pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short, and digital humanities stays nice. (2)
The most obvious question one is left with is: but what is the code doing? Where are the humanities in this vision of the digital? What truly discursive and interpretative work could produce fundamental disagreements that could be resolved simply by verifying the code in a community setting? Also, the celebration of how enforceable community norms are if an argument goes “too far afield” presents a troubling vision of a discursive community, one in which the appearance of agreement, enforceable through “empirical” testing, is more important than freedom of debate. In our current political climate, one wonders whether such empirically minded groupthink adequately makes room for more vulnerable, and not quite as loud, voices. When the goal is a functioning website or program, Scheinfeldt may be quite right; but in discursive work in the humanities, citing a text, for instance, rarely quells disagreement and only makes clearer where the battle lines are drawn. This is particularly ironic given that the digital humanities, understood as a giant, discursive, never-quite-adequate term for the field, is still defining itself, and has been for decades, with essay after essay asking just what DH is.
I am echoing here some of the arguments offered by Adeline Koh in her essay “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” In this important intervention, Koh argues that DH is centered on two linked characteristics: niceness and technological expertise. Though one might think these requirements disparate, Koh reveals how they are linked in the formation of a DH social contract:
In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge—which appears to happen in [prominent DH scholar Stephen] Ramsay’s formulation of DH 2—is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility. In other words, these two rules form the basis of an attempt to enforce a digital humanities social contract: necessary conditions (technical knowledge) that impose civic responsibilities (civility and niceness). (100)
Koh believes that what is necessary to loosen the link between the DH social contract and the tenets of liberalism is an expanded genealogy of the digital humanities, and she urges DH to consider its roots beyond humanities computing.[4]
To demand that one work with technical expertise on “this actually existing work”—whatever that work may end up being—is to state rather clearly that there are guidelines fencing in the digital humanities. As in the history of cartographic studies, the opinions of the makers, those paying attention to data sets, have been allowed to determine what the digital humanities are (or what DH is). Just as J.B. Harley challenged historians and theorists of cartography to ignore what the cartographers say and to explore maps and mapmaking outside of the tools needed to make a map, perhaps DH is ready to enter a new phase, one in which it begins its own renewal by no longer valorizing tools, code, and technology, and by letting the observers, the consumers, the fantasists, and the historians of power and oppression in (without their laptops). Indeed, what DH can learn from the history of cartography is that what DH is, in all its many forms, is seldom (just) what digital humanists say it is.
_____
Tim Duffy is a scholar of Renaissance literature, poetics, and spatial philosophy.
[1] See David Stoddart, “Geography—a European Science,” in On Geography and Its History, pp. 28-40. For a discussion of Stoddart’s thinking, see Derek Gregory, Geographical Imaginations, pp. 16-21.
[2] Obviously, critics and writers make, but their critique exists outside of the production of the artifact that they study. A cartographic theorist, as this article argues, need not be a cartographer any more than a critic or theorist of the digital need be a programmer or creator of digital objects.
[3] For more on the political problems of dependence on grants, see Waltzer (2012): “One of those conditions is the dependence of the digital humanities upon grants. While the increase in funding available to digital humanities projects is welcome and has led to many innovative projects, an overdependence on grants can shape a field in a particular way. Grants in the humanities last a short period of time, which make them unlikely to fund the long-term positions that are needed to mount any kind of sustained challenge to current employment practices in the humanities. They are competitive, which can lead to skewed reporting on process and results, and reward polish, which often favors the experienced over the novice. They are external, which can force the orientation of the organizations that compete for them outward rather than toward the structure of the local institution and creates the pressure to always be producing” (340-341).
[4] In her reading of how digital humanities deploys niceness, Koh writes “In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge…is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility” (100).
Dalché, Patrick Gautier. 2007. “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century).” In The History of Cartography, Volume 3: Cartography in the European Renaissance, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 285-364.
Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. New York: Punctum Books.
Harley, J.B. 2011. “Deconstructing the Map.” In The Map Reader: Theories of Mapping Practice and Cartographic Representation, First Edition. Edited by Martin Dodge, Rob Kitchin, and Chris Perkins. New York: John Wiley & Sons, Ltd. 56-64.
Jacob, Christian. 2005. The Sovereign Map. Translated by Tom Conley. Chicago: University of Chicago Press.
Kirschenbaum, Matthew. 2014. “What is ‘Digital Humanities,’ and Why Are They Saying Such Terrible Things about It?” Differences 25:1. 46-63.
Koh, Adeline. 2014. “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” Differences 25:1. 93-106.
Scheinfeldt, Tom. 2012. “Why Digital Humanities is ‘Nice.’” In Debates in the Digital Humanities. Edited by Matthew Gold. Minneapolis: University of Minnesota Press.
Watts, Pauline Moffitt. 2007. “The European Religious Worldview and Its Influence on Mapping.” In The History of Cartography, Volume 3: Cartography in the European Renaissance, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 382-400.
In a passage from his 2014 book Information Doesn’t Want to Be Free, author and copyright reformer Cory Doctorow sounds a familiar note against strict copyright: “Creators and investors lose control of their business—they become commodity suppliers for a distribution channel that calls all the shots. Anti-circumvention [laws such as the Digital Millennium Copyright Act, which prohibits subverting controls on the intended use of digital objects] isn’t copyright protection, it’s middleman protection” (50).
This is the specter haunting the digital cultural economy, according to many of the most influential voices arguing to reform or disrupt it: the specter of the middleman, the monopolist, the distortionist of markets. Rather than an insurgency, this specter emanates from economic incumbency: these middlemen are the culture industries themselves. With the dual revolutions of the personal computer and the internet connection, record labels, book publishers, and movie studios could maintain their control and their profits only by asserting and strengthening intellectual property protections and by squelching the new technologies that subverted them. Thus, these “monopolies” of cultural production threatened to prevent individual creators from using technology to reach their audiences independently.
Such a critique became conventional wisdom among a rising tide of people who had become accustomed to using the powers of digital technology to copy and paste in order to produce and consume cultural texts, beginning with music. It was most comprehensively articulated in a body of arguments, largely produced by technology evangelists and tech-aligned legal professionals, hailing from the Free Culture movement spearheaded by Lawrence Lessig. The critique’s practical form was the host of piratical activities and peer-to-peer technologies that, in addition to obviating traditional distribution chains, dedicated themselves to attacking culture industries, as well as their trade organizations such as the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA).
Connected to this critique is an alternate vision of the digital economy, one that leverages new technological commons, peer production, and network effects to empower creators. This vision has variations and travels under a number of different political banners, from anarchist to libertarian to liberal, with many more adherents who prefer no label at all.[1] It tells a compelling story (one Doctorow has adapted into novels for young people): against corporate monopolists and state regulation, a multitude, empowered by the democratizing effects bequeathed to society by networked personal computers and the other technologies springing from them, is poised to revolutionize the production of media and information and, therefore, the political and economic structure as a whole. Work will be small-scale and independent but, bereft of corporate behemoths, more lucrative than in the past.
This paper traces the contours of the critique put forth by Doctorow and other revolutionaries of networked digital production in light of a nineteenth-century thinker who espoused remarkably similar arguments over a century ago: the French anarchist Pierre-Joseph Proudhon. Few of these writers are evident readers of Proudhon or explicitly subscribe to his views, though some, such as the Center for a Stateless Society, do. Rather than a formal doctrine, what I call “Digital Proudhonism” is better understood as what Raymond Williams (1977) calls a “structure of feeling”: a kind of “practical consciousness” that identifies “meanings and values as they are actively lived and felt” (132), in this case related to specific experiences of networked computer use. In the case under discussion, these “affective elements of consciousness and relationships” are often articulated in a political, or at least polemical, register, with real effects on the political self-understanding of networked subjects, the projects they pursue, and their relationship to existing law, policy, and institutions. Because of this, I seek to do more than identify currents of contemporary Digital Proudhonism. I maintain that the influence of this set of practices and ideas over the politics of digital production necessitates a critique. I argue that a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry, and in reformulating both a theory of cultural production under digital capitalism and a radical politics of work and technology for the twenty-first century.
From the Californian Ideology to Digital Proudhonism
What I am calling Digital Proudhonism has precedent in the social critique of techno-utopian beliefs surrounding the internet. It echoes Langdon Winner’s (1997) diagnosis of “cyberlibertarianism” in the Progress and Freedom Foundation’s 1994 manifesto “Magna Carta for the Knowledge Age,” where “the wedding of digital technology and the free market” manages to “realize the most extravagant ideals of classical communitarian anarchism” (15). Above all, it bears a marked resemblance to Barbrook and Cameron’s (1996) landmark analysis of the “Californian Ideology,” that “bizarre mish-mash of hippie anarchism and economic liberalism beefed up with lots of technological determinism” emerging from the Wired-magazine corners of the rise of networked computing, which claims that digital technology is the key to realizing freedom and autonomy (56). As the authors put it, “the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through profound faith in the emancipatory potential of new information technologies” (45).
My contribution will follow the argument of Barbrook and Cameron’s exemplary study. As good Marxists, they recognized that ideology was not merely an abstract belief system but “offers a way of understanding the lived reality” (50) of a specific social base of “digital artisans”: programmers, software developers, hackers, and other skilled technology workers who “not only tend to be well-paid, but also have considerable autonomy over their pace of work and place of employment” (49). Barbrook and Cameron located the antecedents of the Californian Ideology in Thomas Jefferson’s belief that democracy was best secured by self-sufficient individual farmers, a kind of freedom that, as the authors trenchantly note, “was based upon slavery for black people” (59).
Thomas Jefferson is an oft-cited figure among the digital revolutionaries associated with copyright reform. Law professor James Boyle (2008) drafts Jefferson into the Free Culture movement as a fellow traveler who articulated “a skeptical recognition that intellectual property rights might be necessary, a careful explanation that they should not be treated as natural rights, and a warning of the monopolistic dangers that they pose” (21). Lawrence Lessig cites Jefferson’s remarks on intellectual property approvingly in Free Culture (2004, 84). “Thomas Jefferson and the other Founding Fathers were thoughtful, and got it right,” states Kembrew McLeod (2005) in his discussion of the U.S. Constitution’s clauses on patent and copyright (9).
There is a deeper political and economic resonance between Jefferson and internet activists beyond his views on intellectual property. Jefferson’s ideal productive arrangement of society was one of small individual landowners and petty producers: the yeoman farmer. Jefferson believed that individual self-sufficiency guaranteed a democratic society. The abundance of land in the New World, and the willingness to expropriate it from the indigenous peoples living there, gave his fantasy a plausibility and attraction many Americans still feel today. It was this vision of America as a frontier, an empty space waiting to be filled by new social formations, that makes his philosophy resonate with the techno-adepts described by Barbrook and Cameron, who viewed the Internet in a similar way. One of these Californians, John Perry Barlow (1996), who famously declared to “governments of the Industrial World” that “cyberspace does not lie within your borders,” even co-founded an organization dedicated to a deregulated internet, the Electronic Frontier Foundation.
However, not everything online lent itself to the metaphor of a frontier. Particularly in the realm of music and video, artisans dealt with a field crowded with existing content, as well as with thickets of intellectual property laws that attempted to regulate how that content was created and distributed. There could be no illusion of a blank canvas on which to project one’s ideal society: in fact, these artisans were noteworthy not for producing work independently out of whole cloth, but for refashioning existing works through remix. Lawrence Lessig (2004) quotes mashup artist Girl Talk: “We’re living in this remix culture. This appropriation time where any grade-school kid has a copy of Photoshop and can download a picture of George Bush and manipulate his face how they want and send it to their friends” (14). The project of Lessig and others was not to create the conditions for erecting a new society upon a frontier, as a yeoman farmer might, but to politicize this class of artisans in order to challenge larger industrial concerns, such as record labels and film studios, which used copyright to protect their incumbent position. This very different terrain requires a different perspective from Jefferson’s.
Thomas Jefferson’s vision is not the only expression of the fantasy of a society built on the basis of petty producers. In nineteenth-century Europe, where most land had long been tied up in hereditary estates, large and small, the yeoman farmer ideal held far less influence. Without a belief in abundant land, there could be no illusion of a blank canvas on which a new society could be created: some kind of revolutionary change would have to occur within and against the old one. And so a similar, yet distinct, political philosophy sprang up in France among a similar social base of artisans and craftsmen—those who tended to control their own work process and own their own tools—who made up a significant part of the French economy. As they were used to an individualized mode of production, they too believed that self-sufficiency guaranteed liberty and prosperity. The belief that society should be organized along the lines of petty individual commodity producers, without interference from the state—a belief remarkably consonant with the views of a variety of digital utopians—found its most powerful expression in the ideas of Pierre-Joseph Proudhon. It is to his ideas that I now turn.
What was Proudhonism?
An anarchist and an influential presence in the International Workingmen’s Association, of which Karl Marx was also a part, Proudhon found his ideas especially popular in his native France, where the economy was rooted far more deeply in small-scale artisanal production than was the industrial-scale capitalism Marx experienced in Britain. His first major work, What Is Property? ([1840] 2011) (Proudhon’s pithy answer: property is theft), caught the attention of Marx, who admired the work’s thrust and style even while he criticized its grasp of the science of political economy. After attempting to win over Proudhon by teaching him political economy and Hegelian dialectics, Marx became a vehement critic of Proudhon’s ideas, which held more sway over the First International than Marx’s own.
Proudhon was critical of the capitalism of his day, but he made his criticisms, along with his proposals for a better society, from the perspective of a specific class. Rather than analyze, as Marx did, the contradictions of capitalism through the figure of the proletarian, who possesses nothing but their own capacity to work, Proudhon understood capitalism from the perspective of the artisanal small producer, who owns and labors with their own small-scale means of production. David McNally (1993), in his survey of eighteenth- and nineteenth-century radical political economy, summarizes Proudhon’s beliefs: Proudhon “envisages a society [of] small independent producers—peasants and artisans—who own the products of their personal labour, and then enter into a series of equal market exchanges. Such a society will, he insists, eliminate profit and property, and ‘pauperism, luxury, oppression, vice, crime and hunger will disappear from our midst’” (140).
For Proudhon, the massive property accumulation of large firms and the accompanying state collusion distort these market exchanges. Under the prevailing system, he asserts in The Philosophy of Poverty, “there is irregularity and dishonesty in exchange” ([1847] 2012, 124), a problem exemplified by monopoly and its perversion of “all notions of commutative justice” (297). Monopoly permits unjust property extraction: Proudhon states in General Idea of the Revolution in the Nineteenth Century ([1851] 2003) that “the price of things is not proportionate to their VALUE: it is larger or smaller according to an influence which justice condemns, but the existing economic chaos excuses” (228). Exploitation thereby becomes a consequence of market disequilibria—the upward and downward deviations of price from value. It is a faulty market, warped by state intervention and too-powerful entrenched interests, that is the cause of injustice. The Philosophy of Poverty details all manner of economic disaster caused by monopoly: “the interminable hours, disease, deformity, degradation, debasement, and all the signs of industrial slavery: all these calamities are born of monopoly” (290).
As McNally’s (1993) work shows, blaming economic woes on “monopolists” and “middlemen” ran rife in popular critiques of political economy during the seventeenth and eighteenth centuries, leading many radicals to call for free trade as a solution to widespread poverty. Proudhon’s anarchism was part of this general tendency. In General Idea of the Revolution in the Nineteenth Century ([1851] 2003), he railed against “middlemen, commission dealers, promoters, capitalists, etc., who, in the old order of things, stand in the way of producer and consumer” (90). The exploiters worked by obstructing and manipulating the exchange of goods and services on the market.
Proudhon’s particular view of economic injustice begets its own version of how best to change it. His revolutionary vision centers on the abolition of monopolies and on currency reform, remedies for two ways in which “monopolists” intervened in the smooth functioning of the market. He remained dedicated to the belief that the ills of capitalism arose from concentrations of ownership creating unjust political power that could further distort the functioning of the market, and he envisioned a market-based society in which “political functions have been reduced to industrial functions, and that social order arises from nothing but transactions and exchanges” (1979, 11).
Proudhon evinced a technological optimism that Marx would later criticize. From his petty producer standpoint, he believed technology would empower workers by overcoming the division of labor:
Every machine may be defined as a summary of several operations, a simplification of powers, a condensation of labor, a reduction of costs. In all these respects machinery is the counterpart of division. Therefore through machinery will come a restoration of the parcellaire laborer, a decrease of toil for the workman, a fall in the price of his product, a movement in the relation of values, progress towards new discoveries, advancement of the general welfare. ([1847] 2012, 167)
While Proudhon recognized some of the dynamics by which machinery could immiserate workers through deskilling and automating their work, he remained strongly skeptical of organized measures to ameliorate this condition. He rejected compensating the unemployed through taxation because it would “visit ostracism upon new inventions and establish communism by means of the bayonet” ([1847] 2012, 207); he also criticized employing out-of-work laborers in public works programs. Technological development should remain unregulated, leading to eventual positive outcomes: “The guarantee of our liberty lies in the progress of our torture” (209).
Marx’s Critique of Proudhon
Marx, after attempting to influence Proudhon, became one of his most vehement critics, attacking his rival’s arguments both major and marginal. Marx had a very different understanding of the new industrial society of the nineteenth century. He diagnosed his rival’s misrepresentations of capitalism as derived from a particular class basis: Proudhon’s theories emanated “from the standpoint and with the eyes of a French small-holding peasant (later petit bourgeois)” (Marx [1865] 2016) rather than from those of the proletarian, who possesses nothing but labor-power, which must be exchanged for a wage from the capitalist.
Since small producers own their own tools and depend largely on their own labor, they do not perceive any conflict between ownership of the means of production and labor: analysis from this standpoint, such as Proudhon’s, tends to collapse these categories together. Marx’s theorization of capitalism centered an emergent class of industrial proletarians who, unlike small producers, owned nothing but their ability to sell their labor-power for a wage. Without any other means of survival, the proletarian could experience the “labor market” not as a meeting of equals coming to a mutually beneficial exchange of commodities, but only as an abstraction from the concrete truth that working for whatever wage was offered was compulsory rather than a voluntary contract. Further, it was this very market for labor-power that, in the guise of an equal exchange of commodities, helped to obscure the fact that capitalist profit depended on extracting value from workers beyond what their wages compensated. This surplus value emerged in the production process, not, as Proudhon argued, at a later point where the goods produced were bought and sold. Without a conception of a contradiction between ownership and labor, the petty producer standpoint cannot see exploitation occurring in production.
Instead, Proudhon saw exploitation occurring after production, during exchanges on a market distorted by unfair monopolies, held intact through state intervention, with which petty producers could not compete. However, Marx ([1867] 1992) demonstrated that “monopolies” were simply the outcome of the concentration of capital due to competition: in his memorable wording from Capital, “One capitalist always strikes down many others” (929). As producers compete and more and more of them fail and are proletarianized, capital is held in fewer and fewer hands. In other words, monopolies are a feature, not a bug, of market economies.
Proudhon’s misplaced emphasis on villainous monopolies is part of a greater error in diagnosing the momentous changes in the nineteenth-century economy: a neglect of the centrality of massive industrial-scale production to mature capitalism. In the first volume of Capital, Marx ([1867] 1992) argues that petty production was a historical phenomenon that would give way to capitalist production: “Private property which is personally earned, i.e., which is based, as it were, on the fusing together of the isolated, independent working individual with the conditions of his labour, is supplanted by capitalist private property, which rests on exploitation of alien, but formally free labour” (928). As producers compete and more and more producers fail and are proletarianized, capital—and with it, labor—concentrates.
However, petty production persisted alongside industrial capitalism in ways that masked how the continued existence of the former relied on the latter. Under capitalism, through the commodification of labor-power in the wage relationship, concrete acts of labor are transformed into labor in the abstract within a system of industrial production for exchange. This abstract labor, the basis of surplus value, is for Marx the “specific social form of labour” in capitalism (Murray 2016, 124). Without understanding abstract labor, Proudhon could not perceive how capitalism functioned not simply as a means of producing profit, but as a system structuring all labor in society.
The importance of abstract labor to capitalism also meant that Proudhon’s plans to reform currency by tying its value to labor-time would fail. As Marx ([1847] 1973) puts it in his book-length critique of Proudhon, “in large-scale industry, Peter is not free to fix for himself the time of his labor, for Peter’s labor is nothing without the co-operation of all the Peters and all the Pauls who make up the workshop” (77). In other words, because commodities under capitalism are manufactured through a complex division of labor, with different workers exercising differing levels of labor productivity, it is impossible to apportion specific quantities of time to specific labors on individual commodities. Without an understanding of the role of abstract labor in capitalist production, Proudhon simply could not grapple with the actual mechanisms of capitalism’s structuring of labor in society, and so could not develop plans to overcome it. This overcoming could occur only through a political intervention that sought to organize production from the point of view of its socialization, not, as Proudhon believed, through reforming elements of the exchange system to preserve individual producers.
The Roots of Digital Proudhonism
Many of Proudhon’s arguments were revived among digital radicals and reformers during the battles over copyright precipitated by networked digital technologies in the 1990s, of which Napster is the exemplary case. The techno-optimistic belief that the Internet would bring radical democratic change to cultural production took on a highly Proudhonian cast. The internet would “empower creators” by eliminating “middlemen” and “gatekeepers” such as record labels and distributors, who were cast as the ultimate source of exploitation, and by allowing exchange to happen on a “peer-to-peer” basis. Once the “monopoly” granted by copyright protections was subverted, radical change would happen on the basis of an increased potential for voluntary market exchange, not political or social revolution.
Siva Vaidhyanathan’s The Anarchist in the Library (2005) is a representative example of this argument, one made with explicit appeals to anarchist philosophy. According to Vaidhyanathan, “the new [peer-to-peer] technology evades the professional gatekeepers, flattening the production and distribution pyramid…. Digitization and networking have democratized the production of music” (48). This democratization by peer-to-peer distribution threatens “oligarchic forces such as global entertainment conglomerates” even as it works to “empower artists in new ways and connect communities of fans” (102).
The seeds of Digital Proudhonism were planted earlier than Napster, in the beliefs and practices of the Free Software movement. Threatened by intellectual property protections that signaled the corporatization of software development, the academics and amateurs of the Free Software movement developed alternative licenses that would keep software code “open” and thus available for any interested coder to share and build upon. This successfully protected the autonomous and collaborative working practices of the group. The movement’s major success was the Linux operating system, collaboratively built by a distributed team of mostly volunteer programmers who created a free alternative to the proprietary systems of Microsoft and Apple.
Linux indicated to those examining the front lines of technological development that, far from being just a software development model, Free Software could actually be an alternative mode of production, and even a harbinger of democratic revolution. The triumph of an unpaid, network-based community of programmers creating a free and open product in the face of an IP-dependent monopoly like Microsoft seemed to realize one of Marx’s ([1859] 1911) technologically determinist prophecies from A Contribution to the Critique of Political Economy:
At a certain stage of their development, the material forces of production in society come into conflict with the existing relations of production or—what is but a legal expression of the same thing—with the property relations within which they had been at work before. From forms of development of the forces of production these relations turn into their fetters. Then comes the era of social revolution. (12)
The Free Software movement provoked a wave of political initiatives and accompanying theorizations of a new digital economy based on what Yochai Benkler (2006) called “commons-based peer production.” With networked personal computers so widely distributed, “[t]he material requirements for effective information production and communication are now owned by numbers of individuals several orders of magnitude larger than the number of owners of the basic means of information production and exchange a mere two decades ago” (4). Suddenly, and almost as if by accident, the means of production were in the hands, not of corporations or states, but of individuals: a perfect encapsulation of the petty producer economy.
The classification of file sharing technologies such as Napster as “peer-to-peer” solidified this view. Napster’s design allowed users to exchange MP3 files by linking “peers” to one another, without storing files on Napster’s own servers. This performed two useful functions. It dispersed the server load for hosting and exchanging files among the computers and connections of Napster’s user base, alleviating what would have been massive bandwidth expenses. It also provided Napster with a defense against charges of infringement, as its own servers were not involved in copying files. This design, it was hoped, might offer protection from the charges that had doomed the site MP3.com, which had hosted user files.
While Napster’s suggestion that corporate structures for the distribution of culture could be supplanted by a voluntary federation of “peers” was important, it was ultimately a mystification. Not only did the courts find Napster liable for facilitating infringement, but the flat, “decentralized” topology of Napster still relied on the company’s central listing service to connect peers. Yet the ideological impact was profound. A law review article by Raymond Ku (2002), then director of the Institute of Law, Science & Technology at Seton Hall University School of Law, is illustrative of both the nature of the arguments and how widespread and respectable they became in the post-Napster era: “the argument for copyright is primarily an argument for protecting content distributors in a world in which middlemen are obsolete. Copyright is no longer needed to encourage distribution because consumers themselves build and fund the distribution channels for digital content” (263). Clay Shirky’s (2008) paeans to “the mass amateurization of efforts previously reserved for media professionals” sound a similar note (55), presenting a technologically functionalist explanation for the existence of “gatekeeper” media industries: “It used to be hard to move words, images, and sounds from creator to consumer… The commercial viability of most media businesses involves providing those solutions, so preservation of the original problems became an economic imperative. Now, though, the problems of production, reproduction, and distribution are much less serious” (59). This narrative has remained persistent years after the brief flourishing of Napster: “the rise of peer-to-peer distribution systems… make middlemen hard to identify, if not cutting them out of the process altogether” (Kernfeld 2011, 217).
This situation was given an emancipatory political valence by intellectuals associated with copyright reform. Eager to protect an emerging sector of cultural production founded on sampling, remixing, and file sharing, they described the accumulation of digital information and media online as a “commons,” which could be treated in a manner alternative to private property. Due to the lack of rivalry among digital goods (Benkler 2006, 36), users do not deplete the common stock, and so should benefit from a laxer approach to property rights. Law professor Lawrence Lessig (2004) started an initiative, Creative Commons, dedicated to establishing new licenses that would “build a layer of reasonable copyright on top of the extremes that now reign” (282). Part of Lessig’s argument for Creative Commons classifies media production and distribution, such as making music videos or mashups, as a “form of speech.” Copyright therefore acts as unjust government regulation, and so must be resisted. “It is always a bad deal for the government to get into the business of regulating speech markets,” Lessig argues, even going so far as to raise the specter of communist authoritarianism: “It is the Soviet Union under Brezhnev” (128). Here Lessig performs a delicate rhetorical sleight of hand: by positioning cultural production as speech, he reifies a vision of such production as emanating from a solitary, individual producer who must remain unencumbered when bringing that speech to market.
Cory Doctorow (2014), a poster child of achievement in the new peer-to-peer world (in Free Culture, Lessig boasts of Doctorow’s successful promotional strategy of giving away electronic copies of his books for free), argues from a pro-market position against middlemen in his latest book: “copyright exists to protect middlemen, retailers, and distributors from being out-negotiated by creators and their investors” (48). While the argument remains the same, some targets have shifted: “investors” are “publishers, studios, record labels” while “intermediaries” are the platforms of distribution: “a distributor, a website like YouTube, a retailer, an e-commerce site like Amazon, a cinema owner, a cable operator, a TV station or network” (27).
While these critiques of copyright focus on egregious overreach by the culture industries and their assault upon all manner of benign noncommercial activity, they also reveal a vision of an alternative cultural economy of independent producers who, while not necessarily anti-capitalist, can escape the clutches of massive centralized corporations through networked digital technologies. This independence from control and regulation promises both economic and political freedom, along with maximum opportunities on the market. “By giving artists the tools and technologies to take charge of their own production, marketing, and distribution, digitization underscored the disequilibrium of traditional record contracts and offered what for many is a preferable alternative” (Sinnreich 2013, 124). As it so often does, the fusion of ownership and labor characteristic of the petty producer standpoint, the structure of feeling of the independent artisan, articulates itself through the mantra of “Do It Yourself.”
These analyses and polemics reproduce the Proudhonist vision of an alternative to existing digital capitalism. Individual independent creators will achieve political autonomy and economic benefit through the embrace of digital network technologies, as long as these creators are allowed to compete fairly with incumbents. Rather than insist on collective regulation of production, Digital Proudhonism seeks forms of deregulation, such as copyright reform, that will chip away at the “monopoly” power of existing media corporations that fetters the market chances of these digital artisans.
Digital Proudhonism Today
Rooted in emergent digital methods of cultural production, the first wave of Digital Proudhonism shored up its petty producer standpoint through a rhetoric that centered the figure of the artist or “creator.” The contemporary term is the more expansive “the creative,” which lionizes a larger share of the digital economy’s knowledge workers. As Sarah Brouillette (2009) notes, thinkers from management gurus such as Richard Florida to radical autonomist Marxist theorists such as Paolo Virno “broadly agree that over the past few decades more work has become comparable to artists’ work.” As a kind of practical consciousness, Digital Proudhonism easily spreads through the channels of the so-called “creative class,” its politics and worldview traveling under the banner of a host of other endeavors. These initiatives self-consciously seek to realize the ideals of Proudhonism in fields beyond the confines of music and film, with impact in manufacturing, social organization, and finance.
The maker movement is one prominent translation of Digital Proudhonism into a challenge to the contemporary organization of production, with allegedly radical effects on politics and economics. With the advent of new production technologies, such as 3D printers and digital design tools, “makers” can take the democratizing promise of the digital commons into the physical world. Just as digital technology supposedly distributes the means of production of culture across a wider segment of the population, so too will it spread manufacturing blueprints, blowing apart the restrictions of patents the same way Napster tore copyright asunder. “The process of making physical stuff has started to look more like the process of making digital stuff,” claims Chris Anderson (2012), author of Makers: The New Industrial Revolution (25). This has a radical effect: a realization of the goals of socialism via the unfolding of technology and the granting of access. “If Karl Marx were here today, his jaw would be on the floor. Talk about ‘controlling the tools of production’: you (you!) can now set factories into motion with a mouse click” (26). The key to this revolution is the ability of open-source methods to lower costs, thereby fusing the roles of inventor and entrepreneur (27).
Anderson’s “new industrial revolution” is one of a distinctly Proudhonian cast. Digital design tools are “extending manufacturing to a hugely expanded population of producers—the existing manufacturers plus a lot of regular folk who are becoming entrepreneurs” (41). The analogy to the rise of remix culture and amateur production lionized by Lessig is deliberate: “Sound familiar? It’s exactly what happened with the Web” (41). Anderson envisions the maker movement as akin to the nineteenth-century petty producers Proudhon championed: cottage industries “were closer to what a Maker-driven New Industrial Revolution might be than are the big factories we normally associate with manufacturing” (49). Anderson’s preference for the small producer over the large factory echoes Proudhon. The subject of this revolution is not the proletarian at work in the large factory, but the artisan who owns their own tools.
A more explicitly radical perspective comes from the avowedly Proudhonist Center for a Stateless Society (C4SS), a “left market anarchist think tank and media center” deeply conversant in libertarian and so-called anarcho-capitalist economic theory. As with Anderson, C4SS subscribes to the techno-utopian potential of a new arrangement of production driven by digital technology, which could reduce the prices of goods and put them within reach of anyone (once again, music piracy is held up as a precursor). However, this potential has not been realized because “economic ruling classes are able to enclose the increased efficiencies from new technology as a source of rents mainly through artificial scarcities, artificial property rights, and entry barriers enforced by the state” (Carson 2015a). Monopolies, enforced by the state, have “artificially” distorted free market transactions.
These monopolies, in the form of intellectual property rights, are preventing a proper Proudhonian revolution in which everyone would control their own individual production process. “The main source of continued corporate control of the production process is all those artificial property rights such as patents, trademarks, and business licenses, that give corporations a monopoly on the conditions under which the new technologies can be used” (Carson 2015a). However, once these artificial monopolies are removed, corporations will lose their power and we can have a world of “small neighborhood cooperative shops manufacturing for local barter-exchange networks in return for the output of other shops, of home microbakeries and microbreweries, surplus garden produce, babysitting and barbering, and the like” (Carson 2015a).
This revolution is a quiet one, requiring no strikes or other confrontations with capitalists. Instead, the answer is to create this new economy within the larger one, and hollow it out from the inside:
Seizing an old-style factory and holding it against the forces of the capitalist state is a lot harder than producing knockoffs in a garage factory serving the members of a neighborhood credit-clearing network, or manufacturing open-source spare parts to keep appliances running. As the scale of production shifts from dozens of giant factories owned by three or four manufacturing firms, to hundreds of thousands of independent neighborhood garage factories, patent law will become unenforceable. (Carson 2015b)
As Marx pointed out long ago, such petty producer fantasies of individually owned and operated manufacturing ironically rely upon the massive amounts of surplus generated from proletarians working in large-scale factories. The devices and infrastructures of the internet itself, as described by Nick Dyer-Witheford (2015) in his appropriately titled Cyber-Proletariat, are an obvious example. But proletarian labor also appears in the Digital Proudhonists’ own utopian fantasies. Anderson, describing the change in innovation wrought by the internet, imagines how his grandfather’s invention of a sprinkler system would have gone differently. “When it came time to make more than a handful of his designs, he wouldn’t have begged some manufacturer to license his ideas, he would have done it himself. He would have uploaded his design files to companies that could make anything from tens to tens of thousands for him, even drop-shipping them directly to customers” (15). These “companies,” of course, are staffed by workers in facilities of mass production, workers very different from “makers.” Their labor is obscured by an influential ideology of artisans who believe themselves reliant on nothing but a personal computer and their own creativity.
A recent Guardian column by Paul Mason, anti-capitalist journalist and author of the techno-optimistic Postcapitalism, serves as a further example. Mason (2016) argues, similarly to the C4SS, that intellectual property is the glue holding together massive corporations, and the key to their power over production. Simply by giving up on patents, as recommended by Anderson, Proudhonists will outflank capitalism on the market. His example is the “revolutionary” business model of the craft brewery chain BrewDog, which “open-sourced its recipe collection” by releasing the information publicly, unlike its larger corporate competitors. For Mason, this is an astonishing act of economic democracy: armed with BrewDog’s recipes, “All you would need to convert them from homebrew approximations to the actual stuff is a factory, a skilled workforce, some raw materials and a sheaf of legal certifications.” In other words, all that is needed to achieve postcapitalism is capitalism precisely as Marx described it.
The pirate fantasies of subverting monopolies extend beyond the initiatives of makers. The Digital Proudhonist belief in revolutionary change, rooted in individual control of production and in exchange on markets liberated from incumbents such as corporations and the state, drives much of the innovation on the margins of tech. A recent treatise on the digital currency Bitcoin lauds Napster’s ability to “cut out the middlemen,” likening the currency to the file sharing technology (Kelly 2014, 11). “It is a quantum leap in the peer-to-peer network phenomenon. Bitcoin is to value transfer what Napster was to music” (33). Much like the advocates of digital currencies, Proudhon believed that state control of money was an unfair manipulation of the market, and he sought to develop alternative currencies and banks rooted in labor-time, a project that Marx criticized for its misunderstanding of the role of abstract labor in production.
In this way, Proudhon and his beliefs fit naturally into the dominant ideologies surrounding Bitcoin and other cryptocurrencies: that economic problems stem from the conspiratorial manipulation of “fiat” currency by national governments and financial organizations such as the Federal Reserve. In light of recent analyses suggesting that Bitcoin functions less as a means of exchange than as a sociotechnical formation to which an array of faulty right-wing beliefs about economics adheres (Golumbia 2016), and the revelation that contemporary fascist groups rely on Bitcoin and other cryptocurrencies to fund their activities (Ebner 2018), it is clear that Digital Proudhonism exists comfortably beside the most reactionary ideologies. Historically, this was true of Proudhon’s own work as well. As Zeev Sternhell (1996) describes, the early twentieth-century French political organization the Cercle Proudhon was captivated by Proudhon’s opposition to Marxism, his distaste for democracy, and his anti-Semitism. According to Sternhell, the group was an influential source of French proto-fascist thought.
Alternatives
The goal of this paper is not to question the creativity of remix culture or the maker movement, to indict their potentials for artistic expression, or to negate all their criticisms of intellectual property. What I wish to criticize are the outsized economic and political claims made on their behalf. These claims have an impact on policy, such as Obama’s “Nation of Makers” initiative (The White House Office of the Press Secretary 2016), which draws upon numerous federal agencies, hundreds of schools, as well as educational product companies to spark “a renaissance of American manufacturing and hardware innovation.” But further, like Marx, I think not only that Proudhonism rests on incorrect analyses of cultural labor, but that such ideas lead to bad politics. As Astra Taylor (2014) extensively documents in The People’s Platform, for all the exclamations of new opportunities with the end of middlemen and gatekeepers, the creative economy is as difficult as it ever was for artists to navigate; she notes that writers like Lessig have replaced the critique of the commodification of culture with arguments about state and corporate control (26-7). Meanwhile, many of the fruits of this disintermediation have been plucked by an exploitative “sharing economy” whose platforms use “peer-to-peer” to subvert all manner of regulations; at least one commentator has invoked Napster’s storied ability to “cut out the middlemen” to describe AirBnB and Uber (Karabel 2014).
Digital Proudhonism and its vision of federations of independent individual producers and creators (perhaps now augmented with the latest cryptographic tools) dominates the imagination of a radical challenge to digital capitalism. Its critiques of the corporate internet have become common sense. What kind of alternative radical vision is possible? Here I believe it is useful to return to the core of Marx’s critique of Proudhon.
Marx saw that in the unromantic labor of proletarians, which combines varying levels of individual productivity within the factory through machines that are themselves the product of social labor, capitalism’s dynamics create a historically novel form of production—social production—along with new forms of culture and social relations. For Marx ([1867] 1992), this was potentially the basis for an economy beyond capitalism. To attempt to move “back” to individual production was reactionary: “As soon as the workers are turned into proletarians, and their means of labour into capital, as soon as the capitalist mode of production stands on its own feet, then the further socialization of labour and further transformation of the soil and other means of production into socially exploited and, therefore, communal means of production takes on a new form” (928).
The socialization of production under the development of the means of production—the necessity of greater collaboration and the reliance on past labors in the form of machines—gives rise to a radical redefinition of the relationship to one’s output. No one can claim a product was made by them alone; rather, production demands to be recognized as social. Describing the socialization of labor through industrialization in Socialism: Utopian and Scientific, Engels ([1880] 2008) states, “The yarn, the cloth, the metal articles that now came out of the factory were the joint product of many workers, through whose hands they had successively to pass before they were ready. No one person could say of them: ‘I made that; this is my product’” (56). To put it in the language of cultural production, there can be no author. Or, in another implicit recognition that the work of today relies on the work of many others, past and present: everything is a remix.
Or instead of a remix, a “vortex,” to use the language of Nick Dyer-Witheford (2015), whose Cyber-Proletariat reminds us that the often-romanticized labor of digital creators and makers is but one stratum among the many that make up digital culture. The creative economy is a relatively privileged sector in an immense global “factory” made up of layers of formal and informal workers operating at the points of production, distribution and consumption, from tantalum mining to device manufacture to call center work to app development. The romance of “DIY” obscures the reality that nothing digital is done by oneself: it is always already a component of a larger formation of socialized labor.
The labor of digital creatives and innovators, sutured as it is to a technical apparatus fashioned from dead labor and meant for producing commodities for profit, is therefore already socialized. While some of this socialization is apparent in peer production, much of it is mystified through the real abstraction of commodity fetishism, which masks socialization under wage relations and contracts. Rather than further rely on these contracts to better benefit digital artisans, a Marxist politics of digital culture would begin from the fact of socialization, and as Radhika Desai (2011) argues, take seriously Marx’s call for “a general organization of labour in society” via political organizations such as unions and labor parties (212). Creative workers could align with others in the production chain as a class of laborers rather than as an assortment of individual producers, and form the kinds of organizations, such as unions, that have been the vehicles of class politics, with the aim of controlling society’s means of production, not simply one’s “own” tools or products. These would be bonds of solidarity, not bonds of market transactions. Then the apparatus of digital cultural production might be controlled democratically, rather than by the despotism of markets and private profit.
_____
Gavin Mueller holds a PhD in Cultural Studies from George Mason University. He teaches in the New Media and Digital Culture program at the University of Amsterdam.
[1]The Pirate Bay, the largest and most antagonistic site of the peer-to-peer movement, has founders who identified as libertarian, socialist, and apolitical, respectively, and acquired funding from Carl Lundström, an entrepreneur associated with far-right movements (Schwartz 2014, 142).
_____
Works Cited
Anderson, Chris. 2012. Makers: The New Industrial Revolution. New York: Crown Business.
Barbrook, Richard and Andy Cameron. 1996. “The Californian Ideology.” Science as Culture 6, no. 1: 44-72.
Kelly, Brian. 2014. The Bitcoin Big Bang: How Alternative Currencies Are About to Change the World. Hoboken: Wiley.
Kernfeld, Barry. 2011. Pop Song Piracy: Disobedient Music Distribution Since 1929. Chicago: University of Chicago Press.
Ku, Raymond Shih Ray. 2002. “The Creative Destruction of Copyright: Napster and the New Economics of Digital Technology.” The University of Chicago Law Review 69, no. 1: 263-324.
Lessig, Lawrence. 2004. Free Culture: The Nature and Future of Creativity. New York: Penguin Books.
Lessig, Lawrence. 2008. Remix: Making Art and Commerce Thrive in the New Economy. New York: Penguin.
Marx, Karl. (1847) 1973. The Poverty of Philosophy. New York: International Publishers.
Marx, Karl. (1859) 1911. A Contribution to the Critique of Political Economy. Translated by N.I. Stone. Chicago: Charles H. Kerr and Co.
Marx, Karl. (1865) 2016. “On Proudhon.” Marxists Internet Archive.
Marx, Karl. (1867) 1992. Capital: A Critique of Political Economy, Volume 1. Translated by Ben Fowkes. London: Penguin Books.
Proudhon, Pierre-Joseph. (1840) 2011. “What Is Property?” In Property is Theft! A Pierre-Joseph Proudhon Reader, edited by Iain McKay. Translated by Benjamin R. Tucker. Edinburgh: AK Press.
Proudhon, Pierre-Joseph. (1847) 2012. The Philosophy of Poverty: The System of Economic Contradictions. Translated by Benjamin R. Tucker. Floating Press.
Proudhon, Pierre-Joseph. (1851) 2003. General Idea of the Revolution in the Nineteenth Century. Translated by John Beverly Robinson. Mineola, NY: Dover Publications, Inc.
Proudhon, Pierre-Joseph. (1863) 1979. The Principle of Federation. Translated by Richard Jordan. Toronto: University of Toronto Press.
Schwartz, Jonas Andersson. 2014. Online File Sharing: Innovations in Media Consumption. New York: Routledge.
Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin.
Sinnreich, Aram. 2013. The Piracy Crusade: How the Music Industry’s War on Sharing Destroys Markets and Erodes Civil Liberties. Amherst, MA: University of Massachusetts Press.
Sternhell, Zeev. 1996. Neither Right Nor Left: Fascist Ideology in France. Princeton, NJ: Princeton University Press.
Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in a Digital Age. New York: Metropolitan Books.
Vaidhyanathan, Siva. 2005. The Anarchist in the Library: How the Clash Between Freedom and Control is Hacking the Real World and Crashing the System. New York: Basic Books.
Williams, Raymond. 1977. Marxism and Literature. Oxford: Oxford University Press.
Winner, Langdon. 1997. “Cyberlibertarian Myths and The Prospects For Community.” Computers and Society 27, no. 3: 14-19.
Without even needing to look at the copyright page, an aware reader may be able to date the work of a technology critic simply by considering the technological systems, or forms of media, being critiqued. Unfortunately, in discovering the date of a given critique one may be tempted to conclude that the critique itself must surely be dated. Past critiques of technology may be read as outdated curios, considered prescient warnings that have gone unheeded, or blithely disregarded as the pessimistic braying of inveterate doomsayers. Yet, in the case of Lewis Mumford, even though his activity peaked by the mid-1970s, it would be a mistake to deduce that his insights are of no value to the world of today. Indeed, when it comes to the “digital turn,” it is a “turn” in the road which Mumford saw coming.
It would be reductive to simply treat Mumford as a critic of technology. His body of work includes literary analysis, architectural reviews, treatises on city planning, iconoclastic works of history, impassioned calls to arms, and works of moral philosophy (Mumford 1982; Miller 1989; Blake 1990; Luccarelli 1995; Wojtowicz 1996). Leo Marx described Mumford as “a generalist with strong philosophic convictions,” one whose body of work represents the steady unfolding of “a single view of reality, a comprehensive historical, moral, and metaphysical—one might say cosmological—doctrine” (L. Marx 1990: 167). In the opinion of the literary scholar Charles Molesworth, Mumford is an “axiologist with a clear social purpose: he wants to make available to society a better and fuller set of harmoniously integrated values” (Molesworth 1990: 241), while Christopher Lehmann-Haupt caricatured Mumford as “perhaps our most distinguished flagellator,” and Lewis Coser denounced him as a “prophet of doom” who “hates almost all modern ideas and modern accomplishments without discrimination” (Mendelsohn 1994: 151-152). Perhaps Mumford is captured best by Rosalind Williams, who identified him alternately as an “accidental historian” (Williams 1994: 228) and as a “cultural critic” (Williams 1990: 44), or by Don Ihde, who referred to him as an “intellectual historian” (Ihde 1993: 96). As for Mumford’s own views, he saw himself in the mold of the prophet Jonah, “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (Mumford 1979: 528).
Therefore, in the spirit of this Jonah, let us go see what is happening in Nineveh after the digital turn. Drawing upon Mumford’s oeuvre, particularly the two-volume The Myth of the Machine, this paper investigates similarities between Mumford’s concept of “the megamachine” and the technological world after the digital turn. In drawing out these resonances, I pay particular attention to the ways in which computers featured in Mumford’s theorizing of the “megamachine” and informed his darkening perception. In addition I expand upon Mumford’s concept of “the megatechnic bribe” to argue that, after the digital turn, what takes place is a move from “the megatechnic bribe” towards what I term “megatechnic blackmail.”
In a piece provocatively titled “Prologue for Our Times,” which originally appeared in The New Yorker in 1975, Mumford drolly observed: “Even now, perhaps a majority of our countrymen still believe that science and technics can solve all human problems. They have no suspicion that our runaway science and technics themselves have come to constitute the main problem the human race has to overcome” (Mumford 1975: 374). The “bad news” is that more than forty years later a majority may still believe that.
Towards “The Megamachine”
The two-volume Myth of the Machine was not Mumford’s first attempt to put forth an overarching explanation of the state of the world mixing cultural criticism, historical analysis, and free-form philosophizing; he had previously attempted a similar feat with his Renewal of Life series.
Mumford originally planned the work as a single volume, but soon came to realize that this project was too ambitious to fit within a single book jacket (Miller 1989, 299). The Renewal of Life ultimately consisted of four volumes: Technics and Civilization (1934), The Culture of Cities (1938), The Condition of Man (1944), and The Conduct of Life (1951)—of which Technics and Civilization remains the text that has received the greatest continued attention. A glance at the nearly twenty-year period over which these four books were written makes it obvious that they emerged during a time of immense change and upheaval, and this certainly impacted their shape and argument. The books fall evenly on opposite sides of two events that were to have a profound influence on Mumford’s worldview: the 1944 death of his son Geddes on the Italian front during World War II, and the dropping of atomic bombs on Hiroshima and Nagasaki in 1945.
The four books fit oddly together and reflect Mumford’s steadily darkening view of the world—a pendulous swing from hopefulness to despair (Blake 1990, 286-287). With the Renewal of Life, Mumford sought to construct a picture of the sort of “whole” which could develop such marvelous potential, but which was so morally weak that it wound up using that strength for destructive purposes. Unwelcome though Mumford’s moralizing may have been, it was an attempt, albeit from a tragic perspective (Fox 1990), to explain why things were the way that they were, and what steps needed to be taken for positive change to occur. That the changes taking place were, in Mumford’s estimation, changes for the worse propelled him to develop concepts like “the megamachine” and the “megatechnic bribe” to explain the societal regression he was witnessing.
By the time Mumford began work on The Renewal of Life he had already established himself as a prominent architectural critic and public intellectual. Yet he remained outside of any distinct tradition, school, or political ideology. Mumford was an iconoclastic thinker whose ethically couched regionalist radicalism, influenced by the likes of Ebenezer Howard, Thorstein Veblen, Peter Kropotkin and especially Patrick Geddes, placed him at odds with liberals and socialists alike in the early decades of the twentieth century (Blake 1990, 198-199). For Mumford, the prevailing progressive and radical philosophies had been buried amongst the rubble of World War I. He felt that a fresh philosophy was needed, one that would find in history the seeds for social and cultural renewal, and he thought himself well-equipped to develop such a philosophy (Miller 1989, 298-299). Mumford was hardly the first in his era to attempt such a synthesis (Lasch 1991): by the time he began work on The Renewal of Life, Oswald Spengler had already published a grim version of such a new philosophy (300). Indeed, there is something of a perhaps not-accidental parallel between Spengler’s title The Decline of the West and Mumford’s choice of The Renewal of Life as the title for his own series.
In Mumford’s estimation, Spengler’s work was “more than a philosophy of history”; it was “a work of religious consolation” (Mumford 1938, 218). The two volumes of The Decline of the West are monuments to Prussian pessimism in which Spengler argues that cultures pass “from the organic to the inorganic, from spring to winter, from the living to the mechanical, from the subjectively conditioned to the objectively conditioned” (220). Spengler argued that this is the fate of all societies, and he believed that “the West” had entered into its winter. It is easy to read Spengler’s tracts as woebegone anti-technology dirges (Farrenkopf 2001, 110-112), or as a call for “Faustian man” (Western man) to assert dominance over the machine and wield it lest it be wielded against him (Herf 1984, 49-69); but Mumford observed that Spengler had “predicted, better than more hopeful philosophers, the disastrous downward course that modern civilization is now following” (Mumford 1938, 235). Spengler had been an early booster of the Nazi regime, if a later critic of it, and though Mumford criticized Spengler for the politics he helped unleash, Mumford still saw him as one with “much to teach the historian and the sociologist” (Mumford 1938, 227). Mumford was particularly drawn to, and influenced by, Spengler’s method of writing moral philosophy in the guise of history (Miller 1989, 301). And it may well be that Spengler’s woebegone example prompted Mumford to distance himself from being a more “hopeful” philosopher in his later writings. Nevertheless, where Spengler had gazed longingly towards the coming fall, Mumford, even in the grip of the megamachine, still believed that the fall could be avoided.
Mumford concludes the final volume of The Renewal of Life, The Conduct of Life, with measured optimism, noting: “The way we must follow is untried and heavy with difficulty; it will test to the utmost our faith and our powers. But it is the way toward life, and those who follow it will prevail” (Mumford 1951, 292). Alas, as the following sections will demonstrate, Mumford grew steadily less confident in the prospects of “the way toward life,” and the rise of the computer only served to make the path more “heavy with difficulty.”
The Megamachine
The volumes of The Renewal of Life hardly had enough time to begin gathering dust before Mumford was writing another work that sought to explain why the prophesized renewal had not come. In the two volumes of The Myth of the Machine Mumford revisits the themes of The Renewal of Life while advancing an even harsher critique and developing his concept of the “megamachine.” The idea of the megamachine has been taken up for its explanatory potential by many others beyond Mumford in a range of fields: it was drawn upon by some of his contemporary critics of technology (Fromm 1968; Illich 1973; Ellul 1980), has been commented on by historians and philosophers of technology (Hughes 2004; Jacoby 2005; Mitcham 1994; Segal 1994), has been explored in post-colonial thinking (Alvares 1988), and has sparked cantankerous disagreements amongst those seeking to deploy the term to advance political arguments (Bookchin 1995; Watson 1997). It is a term that shares certain similarities with other concepts that aim to capture the essence of totalitarian technological control, such as Jacques Ellul’s “technique” (Ellul 1967) and Neil Postman’s “technopoly” (Postman 1993). It is an idea that, as I will demonstrate, is still useful for describing, critiquing, and understanding contemporary society.
Mumford first gestured in the direction of the megamachine in his 1964 essay “Authoritarian and Democratic Technics” (Mumford 1964). There Mumford argued that small scale technologies which require the active engagement of the human, that promote autonomy, and that are not environmentally destructive are inherently “democratic” (2-3); while large scale systems that reduce humans to mere cogs, that rely on centralized control and are destructive of planet and people, are essentially “authoritarian” (3-4). For Mumford, the rise of “authoritarian technics” was a relatively recent occurrence; however, by “recent” he had in mind “the fourth millennium B.C.” (3). Though Mumford considered “nuclear bombs, space rockets, and computers” all to be examples of contemporary “authoritarian technics” (5), he considered the first examples of such systems to have appeared under the aegis of absolute rulers who exploited their power and scientific knowledge for immense construction feats such as the building of the pyramids. Those endeavors had created “complex human machines composed of specialized, standardized, replaceable, interdependent parts—the work army, the military army, the bureaucracy” (3). In drawing out these two tendencies, Mumford was clearly arguing in favor of “democratic technics,” but he moved away from these terms once he coined the neologism “megamachine.”
Like the Renewal of Life before it, The Myth of the Machine was originally envisioned as a single book (Mumford 1970, xi). The first volume of the two represents something of a rewriting of Technics and Civilization, but gone from Technics and Human Development is the optimism that had animated the earlier work. By 1959 Mumford had dismissed Technics and Civilization as “something of a museum piece” wherein he had “assumed, quite mistakenly, that there was evidence for a weakening of faith in the religion of the machine” (Mumford 1934, 534). As Mumford wrote The Myth of the Machine he found himself looking at decades of so-called technological progress and seeking an explanation as to why this progress seemed to primarily consist of mountains of corpses and rubble.
With the rise of kingship, in Mumford’s estimation, came the ability to assemble and command people on a scale previously unknown (Mumford 1967, 188). This “machine” functioned by fully integrating all of its components toward a particular goal; “when all the components, political and economic, military, bureaucratic and royal, must be included,” what emerges is “the megamachine,” and along with it “megatechnics” (188-189). It was a structure in which, originally, the parts were made not of steel, glass, stone or copper but of flesh and blood—though each human component was assigned and slotted into a position as though it were a cog. While the fortunes of the megamachine ebbed and flowed for a period, Mumford saw it becoming resurgent in the 1500s as faith in the “sun god” came to be replaced by faith in the “divine king” exploiting new technical and scientific knowledge (Mumford 1970: 28-50). Indeed, in assessing the thought of Hobbes, Mumford goes so far as to state that “the ultimate product of Leviathan was the megamachine, on a new and enlarged model, one that would completely neutralize or eliminate its once human parts” (100).
Unwilling to mince words, Mumford had started The Myth of the Machine by warning that with the “new ‘megatechnics’ the dominant minority will create a uniform, all-enveloping, super-planetary structure, designed for automatic operation” in which “man will become a passive, purposeless, machine-conditioned animal” (Mumford 1967, 3). Writing at the close of the 1960s, Mumford observed that the impossible fantasies of the controllers of the original megamachines were now actual possibilities (Mumford 1970, 238). The rise of the modern megamachine was the result of a series of historic occurrences: the French Revolution, which replaced the power of the absolute monarch with the power of the nation state; World War I, wherein scientists and scholars were brought into the service of the state while moderate social welfare programs were introduced to placate the masses (245); and finally the emergence of tools of absolute control and destructive power such as the atom bomb (253). Figures like Stalin and Hitler were not exceptions to the rule of the megamachine but instances that laid bare “the most sinister defects of the ancient megamachine”: its violent, hateful and repressive tendencies (247).
Even though the power of the megamachine may make it seem that resistance is futile, Mumford was no defeatist. Indeed, The Pentagon of Power ends with a gesture towards renewal that is reminiscent of his argument in The Conduct of Life—albeit with a recognition that the state of the world had grown steadily more perilous. A core element of Mumford’s argument is that the megamachine’s power was reliant on the belief invested in it (the “myth”); if such belief could be challenged, so too could the megamachine itself (Miller 1989, 156). The Pentagon of Power met with a decidedly mixed reaction: it was a main selection of the Book-of-the-Month Club, and The New Yorker serialized much of the argument about the megamachine (157). Yet many of the book’s reviewers denounced Mumford for his pessimism; it was in a review of the book in the New York Times that Mumford was dubbed “our most distinguished flagellator” (Mendelsohn 1994, 151-154). And though Mumford chafed at being dubbed a “prophet of doom” (Segal 1994, 149), it is worth recalling that he liked to see himself in the mode of that “prophet of doom” Jonah (Mumford 1979).
After all, even though Mumford held out hope that the megamachine could be challenged—that the Renewal of Life could still beat back The Myth of the Machine—he glumly acknowledged that the belief that the megamachine was “absolutely irresistible” and “ultimately beneficent…still enthralls both the controllers and the mass victims of the megamachine today” (Mumford 1967, 224). Mumford described this myth as operating like a “magical spell,” but as the discussion of the megatechnic bribe will demonstrate, it is not so much that the audience is transfixed as that they are bought off. Nevertheless, before turning to the topic of the bribe and blackmail, it is necessary to consider how the computer fit into Mumford’s theorizing of the megamachine.
The Computer and the Megamachine
Five years after the publication of The Pentagon of Power, Mumford was still claiming that “the Myth of the Machine” was “the ultimate religion of our seemingly rational age” (Mumford 1975, 375). While it is certainly fair to note that Mumford’s “today” is not our today, it would be foolhardy to dismiss the idea of the megamachine as anachronistic moralizing. And to credit the concept for its full prescience and continued utility, it is worth reading the text closely to consider the ways in which Mumford was writing about the computer—before the digital turn.
Writing to his friend, the British garden city advocate Frederic J. Osborn, Mumford noted: “As to the megamachine, the threat that it now offers turns out to be even more frightening, thanks to the computer, than even I in my most pessimistic moments had ever suspected. Once fully installed our whole lives would be in the hands of those who control the system…no decision from birth to death would be left to the individual” (M. Hughes 1971, 443). It may be that Mumford was merely engaging in a bit of hyperbolic flourish in referring to his view of the computer as trumping his “most pessimistic moments,” but Mumford was no stranger to (or enemy of) pessimistic moments. Mumford was always searching for fresh evidence of “renewal”; his deepening pessimism points to the kinds of evidence he was actually finding. In constructing a narrative that traced the origins of the megamachine across history, Mumford had been hoping to show “that human nature is biased toward autonomy and against submission to technology” (Miller 1990, 157), but in the computer he saw evidence pointing in the opposite direction.
In assessing the computer, Mumford drew a contrast between the basic capabilities of the computers of his day and the direction in which he feared that “computerdom” was moving (Mumford 1970, plate 6). Computers to him were not simply about controlling “the mechanical process” but also “the human being who once directed it” (189). Moving away from historical antecedents like Charles Babbage, Mumford emphasized Norbert Wiener’s attempt to highlight human autonomy, and he praised Wiener’s concern about the tendency on the part of some technicians to view the world only in terms of the sorts of data that computers could process (189). Mumford saw some of the enthusiasm for the computer’s capability as rather “over-rated,” and he cited instances—such as the computer failure in the case of the Apollo 11 moon landing—as evidence that computers were not quite as all-powerful as some claimed (190). In the midst of a growing ideological adoration for computers, Mumford argued that their “life-efficiency and adaptability…must be questioned” (190). Mumford’s critique of computers can be read as an attempt on his part to undermine the faith in computers while such belief was still in its nascent cult state—before it could become a genuine world religion.
Mumford does not assume a wholly dismissive position towards the computer. Instead he takes a stance toward it that is similar to his position towards most forms of technology: its productive use “depends upon the ability of its human employers quite literally to keep their own heads, not merely to scrutinize the programming but to reserve the right for ultimate decision” (190). To Mumford, the computer “is a big brain in its most elementary state: a gigantic octopus, fed with symbols instead of crabs,” but just because it could mimic some functions of the human mind did not mean that the human mind should be discarded (Mumford 1967: 29). The human brain was for Mumford infinitely more complex than a computer could be, and even where computers might catch up in terms of quantitative comparison, Mumford argued that the human brain would always remain superior in qualitative terms (39). Mumford had few doubts about the capability of computers to perform the functions for which they had been programmed, but he saw computers as fundamentally “closed” systems whereas the human mind was an “open” one; computers could follow their programs but he did not think they could invent new ones from scratch (Mumford 1970: 191). For Mumford the rise in the power of computers was linked largely to the shift away from the “old-fashioned” machines such as Babbage’s Calculating Engine—and towards the new digital and electric machines which were becoming smaller and more commonplace (188). And though Mumford clearly respected the ingenuity of scientists like Wiener, he amusingly suggested that “the exorbitant hopes for a computer dominated society” were really the result of “the ‘pecuniary-pleasure’ center” (191). While Mumford’s measured consideration of the computer’s basic functioning is important, what is of greater significance is his thinking regarding the computer’s place in the megamachine.
Whereas much of Technics and Human Development focuses upon the development of the first megamachine, in The Pentagon of Power Mumford turns his focus to the fresh incarnation of the megamachine. This “new megamachine” was distinguished by the way in which it steadily did away with the need for the human altogether—now that there were plenty of actual cogs (and computers) human components were superfluous (258). To Mumford, scientists and scholars had become a “new priesthood” who had abdicated their freedom and responsibility as they came to serve the “megamachine” (268). But if they were the “priesthood,” then whom did they serve? As Mumford explained, in the command position of this new megamachine was to be found a new “ultimate ‘decision-maker’ and Divine King,” and this figure had emerged in “a transcendent, electronic form”: it was “the Central Computer” (273).
Written in 1970, before the rise of the personal computer or the smartphone, Mumford’s warnings about computers may have seemed somewhat excessive. Yet, in imagining the future of a “computer dominated society” Mumford was forecasting that the growth of the computer’s power meant the consolidation of control by those already in power. Whereas the rulers of yore had dreamt of being all-seeing, with the rise of the computer such power ceased being merely a fantasy as “the computer turns out to be the Eye of the reinstated Sun God” capable of exacting “absolute conformity to his demands, because no secret can be hidden from him, and no disobedience can go unpunished” (274). And this “eye” saw a great deal: “In the end, no action, no conversation, and possibly in time no dream or thought would escape the wakeful and relentless eye of this deity: every manifestation of life would be processed into the computer and brought under its all-pervading system of control. This would mean, not just the invasion of privacy, but the total destruction of autonomy: indeed the dissolution of the human soul” (274-275). The mention of “the human soul” may be evocative of a standard bit of Mumfordian moralizing, but the rest of this quote has more to say about companies like Google and Facebook, as well as about the mass surveillance of the NSA, than many things written since. Indeed, there is something almost quaint about Mumford writing of “no action” decades before social media made it so that an action not documented on social media is of questionable veracity. And the comment regarding “no conversation” seems uncomfortably apt in an age when people are cautioned not to disclose private details in front of their smart TVs and in which the Internet of Things populates people’s homes with devices that are always listening.
Mumford may have written these words in the age of large mainframe computers, but his comments on “the total destruction of autonomy” and the push towards “computer dominated society” demonstrate that he did not believe that the power of such machines could be safely locked away. Indeed, that Mumford saw the computer as an example of an “authoritarian technic” makes it highly questionable that he would have been swayed by the idea that personal computers could grant individuals more autonomy. Rather, as I discuss below, it is far more likely that he would have seen the personal computer as precisely the sort of democratic-seeming gadget used to “bribe” people into accepting the larger “authoritarian” system. It is precisely through the placing of personal computers in people’s homes, and eventually on their persons, that the megamachine is able to advance towards its goal of total control.
The earlier incarnations of the megamachine had dreamt of the sort of power that became actually available in the aftermath of World War II thanks to “nuclear energy, electric communication, and the computer” (274). And finally the megamachine’s true goal became clear: “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system” (275). In short, the ultimate purpose of the megamachine was to further the power and enhance the control of the megamachine itself. It is easy to see in this a warning about the dangers of “big data” many decades before that term had entered into common use. Aware of how odd these predictions may have sounded to his contemporaries, Mumford recognized that only a few decades earlier such ideas could have been dismissed as just so much “satire,” but he emphasized that such alarming potentialities were now either already in existence or nearly within reach (275).
In the twenty-first century, after the digital turn, it is easy to find examples of entities that fit the bill of the megamachine. It may, in fact, be easier to do this today than it was during Mumford’s lifetime. For one no longer needs to engage in speculative thinking to find examples of technologies that ensure that “no action” goes unnoticed. The handful of massive tech conglomerates that dominate the digital world today—companies like Google, Facebook, and Amazon—seem almost scarily apt manifestations of the megamachine. Under these platforms “every manifestation of life” gets “processed into the computer and brought under its all-pervading system of control,” whether it be what a person searches for, what they consider buying, how they interact with friends, how they express their likes, what they actually purchase, and so forth. And as these companies compete for data, they work to ensure that nothing is missed by their “relentless eye[s].” Furthermore, though these companies may be technology firms, they are like the classic megamachines insofar as they bring together the “political and economic, military, bureaucratic and royal.” Granted, today’s “royal” are not those who have inherited their thrones but those who owe their thrones to the tech empires at the heads of which they sit. And the status of these platforms’ users, reduced as they are to cogs supplying an endless stream of data, further demonstrates the totalizing effects of the megamachine as it coordinates all actions to serve its purposes. And yet, Google, Facebook, and Amazon are not the megamachine, but rather examples of megatechnics; the megamachine is the broader system of which all of those companies are merely parts.
Though the chilling portrait created by Mumford seems to suggest a definite direction, and a grim final destination, Mumford tried to highlight that such a future “though possible, is not determined, still less an ideal condition of human development” (276). Nevertheless, it is clear that Mumford saw the culmination of “the megamachine” in the rise of the computer and the growth of “computer dominated society.” Thus, “the megamachine” is a forecast of the world after “the digital turn.” Yet, the continuing strength of Mumford’s concept is based not only on the prescience of the idea itself, but in the way in which Mumford sought to explain how it is that the megamachine secures obedience to its strictures. It is to this matter that our attention, at last, turns.
From the Megatechnic Bribe to Megatechnic Blackmail
To explain how the megamachine had maintained its power, Mumford provided two answers, both of which avoid treating the megamachine as a merely “autonomous” force (Winner 1989, 108-109). The first explanation Mumford gives is an explanation of the titular idea itself: “the ultimate religion of our seemingly rational age,” which he dubbed “the myth of the machine” (Mumford 1975, 375). The key component of this “myth” is “the notion that this machine was, by its very nature, absolutely irresistible—and yet, provided that one did not oppose it, ultimately beneficial” (Mumford 1967, 224). Once assembled and set into action, the megamachine appears inevitable, and those living in megatechnic societies are conditioned from birth to think of it in such terms (Mumford 1970, 331).
Yet the second part of the myth is equally, if not more, important: it is not merely that the megamachine appears “absolutely irresistible” but that many are convinced that it is “ultimately beneficial.” This feeds into what Mumford described as “the megatechnic bribe,” a concept which he first sketched briefly in “Authoritarian and Democratic Technics” (Mumford 1964, 6) but which he fully developed in The Pentagon of Power (Mumford 1970, 330-334). The “bribe” functions by offering those who go along with it a share in the “perquisites, privileges, seductions, and pleasures of the affluent society,” so long, that is, as they do not question or ask for anything different from that which is offered (330). And this, Mumford recognizes, is a truly tempting offer, as it allows its recipients to believe they are personally partaking in “progress” (331). After all, a “bribe” only really works if what is offered is actually desirable. But, Mumford warns, once a people opt for the megamachine, once they become acclimated to the air-conditioned pleasure palace of the megatechnic bribe, “no other choices will remain” (332).
By means of this “bribe,” the megamachine is able to effect an elaborate bait and switch: one through which people are convinced that an authoritarian technic is actually a democratic one. For the bribe accepts “the basic principle of democracy, that every member of society should have a share in its goods” (Mumford 1964, 6). Mumford did not deny the impressive things with which people were being bribed, but to see them as only beneficial required, in his estimation, a one-sided assessment which ignored “long-term human purposes and a meaningful pattern of life” (Mumford 1970, 333). It entailed confusing the interests of the megamachine with the interests of actual people. Thus, the problem was not the gadgets as such, but the system in which these things were created and produced, and the purposes for which they were disseminated: the true purpose of these things was to incorporate people into the megamachine (334). The megamachine created a strange and hostile new world, but offered its denizens bribes to convince them that life in this world was actually a treat. Ruminating on the matter of the persuasive power of the bribe, Mumford wondered if democracy could survive after “our authoritarian technics consolidates its powers, with the aid of its new forms of mass control, its panoply of tranquilizers and sedatives and aphrodisiacs” (Mumford 1964, 7). And in typically Jonah-like fashion, Mumford balked at the very question, noting that in such a situation “life itself will not survive, except what is funneled through the mechanical collective” (7).
If one chooses to take the framework of the “megatechnic bribe” seriously, then it is easy to see it at work in the twenty-first century. It is the bribe that stands astride the dais at every gaudy tech launch; it is the bribe that beams down from billboards touting the slightly sleeker design of the new smartphone; it is the bribe that promises connection or health or beauty or information or love or even technological protection from the forces that technology has unleashed. The bribe is the offer of enticing positives that distracts from the legion of downsides. And in all of these cases that which is offered is that which ultimately enhances the power of the megamachine. As Mumford feared, the values that wind up being transmitted across these “bribes,” though they may attempt a patina of concern for moral or democratic values, are mainly concerned with reifying (and deifying) the values of the system offering up these forms of bribery.
Yet this reading should not be taken as a curmudgeonly rejection of technology as such. In keeping with Mumford’s stance, one can recognize that the things put on offer after the digital turn provide people with an impressive array of devices and platforms, but such niceties also seem like the pleasant distraction that masks and normalizes rampant surveillance, environmental destruction, labor exploitation, and the continuing concentration of wealth in a few hands. It is not that there is a total lack of awareness about the downsides of the things that are offered as “bribes,” but that the offer is too good to refuse. And if one has come to believe that the technological status quo is “absolutely irresistible,” it makes sense that one would want to conclude that this situation is “ultimately beneficial.” As Langdon Winner put it several decades ago, “the prevailing consensus seems to be that people love a life of high consumption, tremble at the thought that it might end, and are displeased about having to clean up the messes that technologies sometimes bring” (Winner 1986, 51). Such a sentiment is the essence of the bribe.
Nevertheless, more thought needs to be given to the bribe after the digital turn, the point after which the bribe has already become successful. The background of the Cold War may have provided a cultural space for Mumford’s skepticism, but, as Wendy Hui Kyong Chun has argued, with the technological advances around the Internet in the last decade of the twentieth century, “technology became once again the solution to political problems” (Chun 2006, 25). Therefore, in the twenty-first century, the bribe is no longer merely a means of securing loyalty to a system of control towards which there is substantial skepticism. Or, to put it slightly differently, at this point there are not many people who still really need to be convinced that they should use a computer. We no longer need to hypothesize about “computer dominated society,” for we already live there. After all, the technological value systems about which Mumford was concerned have now gained significant footholds not only in the corridors of power, but in every pocket that contains a smartphone. It would be easy to walk through the library brimming with e-books touting the wonders of all that is digital and persuasively disseminating the ideology of the bribe, but such “sugar-coated soma pills”—to borrow a turn of phrase from Howard Segal (1994, 188)—serve more as examples of the continued existence of the bribe than as explanations of how it has changed.
At the end of her critical history of social media, José Van Dijck (Van Dijck 2013, 174) offers what can be read as an important example of how the bribe has changed, when she notes that “opting out of connective media is hardly an option. The norm is stronger than the law.” On a similar note, Laura Portwood-Stacer in her study of Facebook abstention portrays the very act of not being on that social media platform as “a privilege in itself” —an option that is not available to all (Portwood-Stacer 2012, 14). In interviews with young people, Sherry Turkle has found many “describing how smartphones and social media have infused friendship with the Fear of Missing Out” (Turkle 2015, 145). Though smartphones and social media platforms certainly make up the megamachine’s ecosystem of bribes, what Van Dijck, Portwood-Stacer, and Turkle point to is an important shift in the functioning of the bribe. Namely, that today we have moved from the megatechnic bribe, towards what can be called “megatechnic blackmail.”
Whereas the megatechnic bribe was concerned with assimilating people into the “new megamachine,” megatechnic blackmail is what occurs once the bribe has already been largely successful. This is not to claim that the bribe does not still function—for it surely does through the mountain of new devices and platforms that are constantly being rolled out—but, rather, that it does not work by itself. The bribe is what is at work when something new is being introduced, it is what convinces people that the benefits outweigh any negative aspects, and it matches the sense of “irresistibility” with a sense of “beneficence.” Blackmail, in this sense, works differently—it is what is at work once people become all too aware of the negative side of smartphones, social media, and the like. Megatechnic blackmail is what occurs once, as Van Dijck put it, “the norm” becomes “stronger than the law” as here it is not the promise of something good that draws someone in but the fear of something bad that keeps people from walking away.
This puts the real “fear” in the “fear of missing out,” which no longer needs to promise “use this platform because it’s great” but can instead now threaten “you know there are problems with this platform, but use it or you will not know what is going on in the world around you.” The shift from bribe to blackmail can further be seen in the consolidation of control in the hands of fewer companies behind the bribes—the inability of an upstart social network (a fresh bribe) to challenge the dominant social network is largely attributable to the latter having moved into a blackmail position. It is no longer the case that a person, in a Facebook-saturated society, has a lot to gain by joining the site, but that (if they have already accepted its bribe) they have a lot to lose by leaving it. The bribe secures the adoration of the early adopters, and it convinces the next wave of users to jump on board, but blackmail is what ensures their fealty once the shiny veneer of the initial bribe begins to wear thin.
Mumford had noted that in a society wherein the bribe was functioning smoothly, “the two unforgivable sins, or rather punishable vices, would be continence and selectivity” (Mumford 1970, 332) and blackmail is what keeps those who would practice “continence and selectivity” in check. As Portwood-Stacer noted, abstention itself may come to be a marker of performative privilege—to opt out becomes a “vice” available only to those who can afford to engage in it. To not have a smartphone, to not have a Facebook account, to not buy things on Amazon, or use Google, becomes either a signifier of one’s privilege or marks one as an outsider.
Furthermore, choosing to renounce a particular platform (or to use it less) rarely entails swearing off the ecosystem of megatechnics entirely. As far as the megamachine is concerned, insofar as options are available and one can exercise a degree of “selectivity” what matters is that one is still selecting within that which is offered by the megamachine. The choice between competing systems of particular megatechnics is still a choice that takes place within the framework of the megamachine. Thus, Douglas Rushkoff’s call “program or be programmed” (Rushkoff 2010) appears less as a rallying cry of resistance, than as a quiet acquiescence: one can program, or one can be programmed, but what is unacceptable is to try to pursue a life outside of programs. Here the turn that seeks to rediscover the Internet’s once emancipatory promise in wikis, crowd-funding, digital currency, and the like speaks to a subtle hope that the problems of the digital day can be defeated by doubling down on the digital. From this technologically-optimistic view the problem with companies like Google and Facebook is that they have warped the anarchic promise, violated the independence, of cyberspace (Barlow 1996; Turner 2006); or that capitalism has undermined the radical potential of these technologies (Fuchs 2014; Srnicek and Williams 2015). Yet, from Mumford’s perspective such hopes and optimism are unwarranted. Indeed, they are the sort of democratic fantasies that serve to cover up the fact that the computer, at least for Mumford, was ultimately still an authoritarian technology. For the megamachine it does not matter if the smartphone with a Twitter app is used by the President or by an activist: either use is wholly acceptable insofar as both serve to deepen immersion in the “computer dominated society” of the megamachine. And thus, as to the hope that megatechnics can be used to destroy the megamachine it is worth recalling Mumford’s quip, “Let no one imagine that there is a mechanical cure for this mechanical disease” (Mumford 1954, 50).
In this situation the only thing worse than falling behind or missing out is to actually challenge the system itself: to practice, or to argue that others should practice, “continence and selectivity” leads to one being denounced as a “technophobe” or “Luddite.” That kind of derision fits well with Mumford’s observation that the attempt to live “detached from the megatechnic complex,” to be “cockily independent of it, or recalcitrant to its demands, is regarded as nothing less than a form of sabotage” (Mumford 1970, 330). Minor criticisms can be permitted if they are of the type that can be assimilated and used to improve the overall functioning of the megamachine, but the unforgivable heresy is to challenge the megamachine itself. It is acceptable to claim that a given company should be attempting to be more mindful of a given social concern, but it is unacceptable to claim that the world would actually be a better place if this company were no more. One sees further signs of the threat of this sort of blackmail at work in the opening pages of the critical books about technology aimed at the popular market, wherein the authors dutifully declare that though they have some criticisms they are not anti-technology. Such moves are not the signs of people merrily cooperating with the bribe, but of people recognizing that they can contribute to a kinder, gentler bribe (to a greater or lesser extent) or risk being banished to the margins as fuddy-duddies, kooks, environmentalist weirdos, or as people who really want everyone to go back to living in caves. The “myth of the machine” thrives on the belief that there is no alternative. One is permitted (in some circumstances) to say “don’t use Facebook” but one cannot say “don’t use the Internet.” Blackmail is what helps to bolster the structure that unfailingly frames the megamachine as “ultimately beneficial.”
The megatechnic bribe dazzles people by muddling the distinction between, to use a comparison Mumford was fond of, “the goods life” and “the good life.” But megatechnic blackmail warns those who grow skeptical of this patina of “the good life” that they can either settle for “the goods life” or look forward to an invisible life on the margins. Those who can’t be bribed are blackmailed. Thus it is no longer just that the myth of the machine is based on the idea that the megamachine is “absolutely irresistible” and “ultimately beneficial” but that it now includes the idea that to push back is “unforgivably detrimental.”
Conclusion
Of the various biblical characters from whom one can draw inspiration, Jonah is something of an odd choice for a public intellectual. After all, Jonah first flees from his prophetic task, sleeps in the midst of a perilous storm, and upon delivering the prophecy retreats to a hillside to glumly wait to see if the prophesied destruction will come. There is a certain degree to which Jonah almost seems disappointed that the people of Nineveh mend their ways and are forgiven by God. Yet some of Jonah’s frustrated disappointment flows from his sense that the whole ordeal was pointless—he had always known that God would forgive the people of Nineveh and not destroy the city. Given that, why did Jonah have to leave the comfort of his home in the first place? (JPS 1999, 1333-1337). Mumford always hoped to be proven wrong. As he put it in the very talk in which he introduced himself as Jonah, “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy” (Mumford 1979, 528). But those words do not appear on Mumford’s tombstone.
Assessing whether Mumford was “an absolute fool” and whether any “of the disastrous things that he reluctantly predicted ever came to pass” is a tricky mire to traverse. For the way that one responds to that probably has as much to do with whether or not one shares Mumford’s outlook as with anything particular he wrote. During his lifetime Mumford had no shortage of critics who viewed him as a stodgy pessimist. But what is one to expect if one is trying to follow the example of Jonah? If you see yourself as “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (528) then you can hardly be surprised when many choose to dismiss you as a way of dismissing the bad news you bring.
Yet it has been the contention of this paper that Mumford should not be ignored—and that his thought provides a good tool to think with after the digital turn. In his introduction to the 2010 edition of Mumford’s Technics and Civilization, Langdon Winner notes that it “openly challenged scholarly conventions of the early twentieth century and set the stage for decades of lively debate about the prospects for our technology-centered ways of living” (Mumford 2010, ix). Even if the concepts from The Myth of the Machine have not “set the stage” for debate in the twenty-first century, the ideas that Mumford develops there can pose useful challenges for present discussions around “our technology-centered ways of living.” True, “the megamachine” is somewhat clunky as a neologism, but as a term that encompasses the technical, political, economic, and social arrangements of a powerful system it seems to provide a better shorthand to capture the essence of Google or the NSA than many other terms. Mumford clearly saw the rise of the computer as the invention through which the megamachine would be able to fully secure its throne. At the same time, the idea of the “megatechnic bribe” is a thoroughly discomforting explanation for how people can grumble about Apple’s labor policies or Facebook’s uses of user data while eagerly lining up to upgrade to the latest model of iPhone or clicking “like” on a friend’s vacation photos. But in the present day the bribe has matured beyond a purely pleasant offer into a sort of threat that compels consent. Indeed, the idea of the bribe may be among Mumford’s grandest moves in the direction of telling people what they “don’t want to hear.” It is discomforting to think of your smartphone as something being used to “bribe” you, but the discomfort may itself be a sign of how strongly that claim resonates.
Lewis Mumford never performed a Google search, never made a Facebook account, never Tweeted or owned a smartphone or a tablet, and his home was not a repository for the doodads of the Internet of Things. But it is doubtful that he would have been overly surprised by any of them. Though he may have appreciated them for their technical capabilities he would have likely scoffed at the utopian hopes that are hung upon them. In 1975 Mumford wrote: “Behold the ultimate religion of our seemingly rational age—the Myth of the Machine! Bigger and bigger, more and more, farther and farther, faster and faster became ends in themselves, as expressions of godlike power; and empires, nations, trusts, corporations, institutions, and power-hungry individuals were all directed to the same blank destination” (Mumford 1975, 375).
Is this assessment really so outdated today? If so, perhaps the stumbling block is merely the term “machine,” which had more purchase in the “our” of Mumford’s age than in our own. Today, that first line would need to be rewritten to read “the Myth of the Digital” —but other than that, little else would need to be changed.
_____
Zachary Loeb is a graduate student in the History and Sociology of Science department at the University of Pennsylvania. His research focuses on technological disasters, computer history, and the history of critiques of technology (particularly the work of Lewis Mumford). He is a frequent contributor to The b2 Review Digital Studies section.
Alvares, Claude. 1988. “Science, Colonialism, and Violence: A Luddite View” In Science, Hegemony and Violence: A Requiem for Modernity, edited by Ashis Nandy. Delhi: Oxford University Press.
Blake, Casey Nelson. 1990. Beloved Community: The Cultural Criticism of Randolph Bourne, Van Wyck Brooks, Waldo Frank, and Lewis Mumford. Chapel Hill: The University of North Carolina Press.
Bookchin, Murray. 1995. Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm. Oakland: AK Press.
Cowley, Malcolm and Bernard Smith, eds. 1938. Books That Changed Our Minds. New York: The Kelmscott Editions.
Ezrahi, Yaron, Everett Mendelsohn, and Howard P. Segal, eds. 1994. Technology, Pessimism, and Postmodernism. Amherst: University of Massachusetts Press.
Ellul, Jacques. 1967. The Technological Society. New York: Vintage Books.
Ellul, Jacques. 1980. The Technological System. New York: Continuum.
Farrenkopf, John. 2001. Prophet of Decline: Spengler on World History and Politics. Baton Rouge: LSU Press.
Fox, Richard Wightman. 1990. “Tragedy, Responsibility, and the American Intellectual, 1925-1950” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes, and Agatha C. Hughes. New York: Oxford University Press.
Fromm, Erich. 1968. The Revolution of Hope: Toward a Humanized Technology. New York: Harper & Row, Publishers.
Fuchs, Christian. 2014. Social Media: A Critical Introduction. Los Angeles: Sage.
Herf, Jeffrey. 1984. Reactionary Modernism: Technology, Culture, and Politics in Weimar and the Third Reich. Cambridge: Cambridge University Press.
Hughes, Michael, ed. 1971. The Letters of Lewis Mumford and Frederic J. Osborn: A Transatlantic Dialogue, 1938-1970. New York: Praeger Publishers.
Hughes, Thomas P. and Agatha C. Hughes, eds. 1990. Lewis Mumford: Public Intellectual. New York: Oxford University Press.
Hughes, Thomas P. 2004. Human-Built World: How to Think About Technology and Culture. Chicago: University of Chicago Press.
Chun, Wendy Hui Kyong. 2006. Control and Freedom. Cambridge: The MIT Press.
Ihde, Don. 1993. Philosophy of Technology: an Introduction. New York: Paragon House.
Jacoby, Russell. 2005. Picture Imperfect: Utopian Thought for an Anti-Utopian Age. New York: Columbia University Press.
JPS Hebrew-English Tanakh. 1999. Philadelphia: The Jewish Publication Society.
Lasch, Christopher. 1991. The True and Only Heaven: Progress and Its Critics. New York: W. W. Norton and Company.
Luccarelli, Mark. 1996. Lewis Mumford and the Ecological Region: The Politics of Planning. New York: The Guilford Press.
Marx, Leo. 1988. The Pilot and the Passenger: Essays on Literature, Technology, and Culture in the United States. New York: Oxford University Press.
Marx, Leo. 1990. “Lewis Mumford: Prophet of Organicism.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
Marx, Leo. 1994. “The Idea of ‘Technology’ and Postmodern Pessimism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
Mendelsohn, Everett. 1994. “The Politics of Pessimism: Science and Technology, Circa 1968.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
Miller, Donald L. 1989. Lewis Mumford: A Life. New York: Weidenfeld and Nicolson.
Molesworth, Charles. 1990. “Inner and Outer: The Axiology of Lewis Mumford.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
Mitcham, Carl. 1994. Thinking Through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press.
Mumford, Lewis. 1926. “Radicalism Can’t Die.” The Jewish Daily Forward (English section, Jun 20).
Mumford, Lewis. 1934. Technics and Civilization. New York: Harcourt, Brace and Company.
Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.
Mumford, Lewis. 1944. The Condition of Man. New York: Harcourt, Brace and Company.
Mumford, Lewis. 1951. The Conduct of Life. New York: Harcourt, Brace and Company.
Mumford, Lewis. 1954. In the Name of Sanity. New York: Harcourt, Brace and Company.
Mumford, Lewis. 1959. “An Appraisal of Lewis Mumford’s Technics and Civilization (1934).” Daedalus 88:3 (Summer). 527-536.
Mumford, Lewis. 1962. The Story of Utopias. New York: Compass Books, Viking Press.
Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
Mumford, Lewis. 1967. Technics and Human Development. Vol. 1 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
Mumford, Lewis. 1970. The Pentagon of Power. Vol. 2 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
Mumford, Lewis. 1975. Findings and Keepings: Analects for an Autobiography. New York: Harcourt, Brace and Jovanovich.
Mumford, Lewis. 1979. My Work and Days: A Personal Chronicle. New York: Harcourt, Brace, Jovanovich.
Mumford, Lewis. 1982. Sketches from Life: The Autobiography of Lewis Mumford. New York: The Dial Press.
Mumford, Lewis. 2010. Technics and Civilization. Chicago: The University of Chicago Press.
Portwood-Stacer, Laura. 2012. “Media Refusal and Conspicuous Non-consumption: The Performative and Political Dimensions of Facebook Abstention.” New Media and Society (Dec 5).
Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.
Rushkoff, Douglas. 2010. Program or Be Programmed. Berkeley: Soft Skull Books.
Segal, Howard P. 1994a. “The Cultural Contradictions of High Tech: or the Many Ironies of Contemporary Technological Optimism.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
Segal, Howard P. 1994b. Future Imperfect: The Mixed Blessings of Technology in America. Amherst: University of Massachusetts Press.
Spengler, Oswald. 1932a. Form and Actuality. Vol. 1 of The Decline of the West. New York: Alfred A. Knopf.
Spengler, Oswald. 1932b. Perspectives of World-History. Vol. 2 of The Decline of the West. New York: Alfred A. Knopf.
Spengler, Oswald. 2002. Man and Technics: A Contribution to a Philosophy of Life. Honolulu: University Press of the Pacific.
Srnicek, Nick and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. New York: Verso Books.
Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press.
Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, The Whole Earth Network and the Rise of Digital Utopianism. Chicago: The University of Chicago Press.
Van Dijck, José. 2013. The Culture of Connectivity. Oxford: Oxford University Press.
Watson, David. 1997. Against the Megamachine: Essays on Empire and Its Enemies. Brooklyn: Autonomedia.
Williams, Rosalind. 1990. “Lewis Mumford as a Historian of Technology in Technics and Civilization.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
Williams, Rosalind. 1994. “The Political and Feminist Dimensions of Technological Determinism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
Winner, Langdon. 1989. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press.
Winner, Langdon. 1986. The Whale and the Reactor. Chicago: University of Chicago Press.
Wojtowicz, Robert. 1996. Lewis Mumford and American Modernism: Eutopian Themes for Architecture and Urban Planning. Cambridge: Cambridge University Press.
A student’s initiation into mathematics routinely includes an encounter with the Pythagorean Theorem, a simple statement that describes the relationship between the hypotenuse and sides of a right triangle: the sum of the squares of the sides is equal to the square of the hypotenuse, i.e., A² + B² = C². The statement and its companion figure of a generic right triangle are offered as an interchangeable, seamless flow between geometric “things” and numbers (Kline 1980, 11). Among all the available theorems that might be offered as emblematic of mathematics, this one is held out as illustrative of a larger claim about mathematics and the Real. This use suggests that it is what W. J. T. Mitchell would call a “hypericon,” a visual paradigm that doesn’t “merely serve as [an] illustration to theory; [it] picture[s] theory” (1995, 49). Understood in this sense, the Pythagorean Theorem asserts a central belief of Western culture: that mathematics is the voice of an extra-human realm, a realm of fundamental, unchanging truth apart from human experience, culture, or biology. It is understood as more essential than the world and as prior to it. Mathematics becomes an outlier among representational systems because numbers are claimed to be “ideal forms necessarily prior to the material ‘instances’ and ‘examples’ that are supposed to illustrate them and provide their content” (Rotman 2000, 147).[1] The dynamic flow between the figure of the right triangle and the formula transforms mathematical language into something akin to Christian concepts of a prelapsarian language, a “nomenclature of essences, in which word would have reflected thing with perfect accuracy” (Eagle 2007, 184). As the Pythagoreans styled it, the world is number (Guthrie 1962, 256). The image schools the child into the culture’s uncritical faith in the rhetoric of numbers, a sort of everyman’s version of the Pythagorean vision. Whatever the general belief in this notion, the nature of mathematical representations has been a central problematic of mathematics that appears throughout its history. The difference between the historical significance of this problematic and its current manifestation in the rhetoric of “Big Data” illustrates an important cultural anxiety.
Contemporary culture uses the Pythagorean Theorem’s image and formula as a hypericon that not only obscures problematic assumptions about the consistency and completeness of mathematics, but which also misrepresents the consistency and completeness of the material-world relationships that mathematics is used to describe.[2] This rhetoric of certainty, consistency, and completeness continues to infect contemporary political and ideological claims. For example, “Big Data” enthusiasts – venture capitalists, politicians, financiers, education reformers, policing strategists, et al. – often invoke a neo-Pythagorean worldview to validate their claims, claims that rest on the interplay of technology, analysis, and mythology (Boyd and Crawford 2012, 663). What is a highly productive problematic in the 2,500-year history of mathematics disappears into naïve assertions about the inherent “truth” of the algorithmic outputs of mathematically based technologies. When corporate behemoths like Pearson and Knewton (makers of an adaptive learning platform) participate in events such as the Department of Education’s 2012 “Datapalooza,” the claims become totalizing. Knewton’s CEO, Jose Ferreira, asserts, in a crescendo of claims, that “Knewton gets 5-10 million actionable data points per student per day”; and that tagging content “unlocks data.” In his terms, “work cascades out data” that is then subject to the various models the corporation uses to predict and prescribe the future. His claims of descriptive completeness are correct, he asserts, because “everything in education is correlated to everything else” (November 2012). The narrative of Ferreira’s claims is couched in fluid equivalences of data points, mathematical models, and a knowable future. Data become a metonym for not only the real student, but for the nature of learning and human cognition. In a sort of secularized predestination, the future’s origin in perfectly representational numbers produces perfect predictions of students’ performance. Whatever the scale of the investment dollars behind these New Pythagoreans, such claims lose their patina of objective certainty when placed in the history of the West’s struggle with mathematized claims about a putative “real.” For them, predictions are not the outcomes of processes; rather, predictions are revelations of a deterministic reality.[3]
A recent claim for a facial-recognition algorithm that identifies criminals normalizes its claims by simultaneously asserting and denying that “in all cultures and all periods of recorded human history, [there is] the belief that the face alone suffices to reveal innate traits of a person” (Wu and Zhang 2016, 1). The authors invoke the Greeks:
Aristotle in his famous work Prior Analytics asserted, ‘It is possible to infer character from features, if it is granted that the body and the soul are changed together by the natural affections’ (1).
The authors then remind readers that “the same question has captivated professionals (e.g., psychologists, sociologists, criminologists) and amateurs alike, across all cultures, and for as long as there are notions of law and crime. Intuitive speculations are abundant both in writing . . . and folklore.” Their work seeks to demonstrate that the question yields to a mathematical model, a model that is specifically a non-human intelligence: “In this section, we try to answer the question in the most mechanical and scientific way allowed by the available tools and data. The approach is to let a machine learning method explore the data and reveal the most discriminating facial features that tell apart criminals and non-criminals” (6). The rhetoric solves the problem by asserting an unchanging phenomenon – the criminal face – and by invoking a mathematics that operates via machine learning. Problematic crimes such as “DWB” (driving while black) disappear along with history and social context.
Such claims rest on confused and contradictory notions. For the Pythagoreans, mathematics was not a representational system. It was the real, a reality prior to human experience. This claim underlies the authority of mathematics in the West. But simultaneously, it effectively operates as a response to the world, i.e., it is a re-presentation. As re-presentational, it becomes another language, and like other languages, it is founded on bias, exclusions, and incompleteness. These two notions of mathematics are resolved by seeing the representation as more “real” than the multiply determined events it re-presents. Nonetheless, once we say it re-presents the real, it becomes just another sign system that comes after the real. Often, bouncing back and forth between its extra-human status and its representational function obscures the places where representation fails or becomes an approximation. To data fetishists, “data” has a status analogous to that of “number” in the Pythagorean’s world. For them, reality is embedded in a quasi-mathematical system of counting, measuring, and tagging. But the ideological underpinnings, pedagogical assumptions, and political purposes of the tagging go unremarked; to do so would problematize the representational claims. Because the world is number, coders are removed from the burden of history and from the responsibility to examine the social context that both creates and uses their work.
The confluence of corporate and political forces validates itself through mathematical imagery, animated graphics, and the like. Terms such as “data-driven” and “evidence-based” grant the rhetoric of numbers a power that ignores its problematic assumptions. There is a pervasive refusal to recognize that data are artifacts of the descriptive categories imposed on the world. But “Big Data” goes further; the term is used in ways that perpetuate the antique notion of “number” by invoking numbers as distillations of certainty and a knowable universe. “Number” becomes decontextualized and stripped of its historical, social, and psychological origins. Because the claims of Big Data embed residual notions about the re-presentational power of numbers, and about mathematical completeness and consistency, they speak to such deeply embedded beliefs about mathematics, the most fundamental of which is the Pythagorean claim that the world is number. The point is not to argue whether mathematics is formal, referential, or psychological; rather, it is to place contemporary claims about “Big Data” in historical and cultural contexts where such issues are problematized. The claims of Big Data speak through a language whose power rests on longstanding notions of mathematics; however, these notions lose some of their power when placed in the context of mathematical invention (Rotman 2000, 4-7).
“Big Data” represents a point of convergence for residual mathematical beliefs, beliefs that obscure cultural frameworks and thus interfere with critique. For example, predictive policing tools are claimed to produce neutral, descriptive acts using machine intelligence. Berk asserts that “if you let the computer just snoop around in the dataset, it finds things that are unexpected by existing theory and works really substantially well to help forecast” (Berk 2011). In this view, Big Data – the numerical real – can be queried to produce knowledge that is not driven by any theoretical or ideological interest. Precisely because the world is presumed to be mathematical, the political, economic, and cultural frameworks of its operation can become the responsibility of the algorithm’s users. To this version of a mathematized real, there is no inherently ethical algorithmic action prior to the use of its output. Thus, the operation of the algorithm is doubly separated from its social contexts. First, the mathematics themselves are conceived as autonomous embodiments of a reality independent of the human; second, the effects of the algorithm – its predictions – are apart from values, beliefs, and needs that create the algorithm. The specific limits of historical and social context do not mathematically matter; the limits are determined by the values and beliefs of the algorithm’s users. The problematics of mathematizing the world are passed off to its customers. Boyd and Crawford identify three interacting phenomena that create the notion of Big Data: technology, analysis, and mythology (2012, 663). The mythological element embodies both dystopian and utopian narratives, and thus how we categorize reality. O’Neil notes that “these models are constructed not just from data but from the choices we make about which data to pay attention to – and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral” (2016, 218). On one hand, the predictive value depends on the moral, ethical, and political values of the user, a non-mathematical question. On the other hand, this division between the model and its application carves out a special arena where the New Pythagoreans claim that it operates without having to recognize social or historical contexts.
Whatever their commitment to number, the Pythagoreans were keenly aware that their system was vulnerable to discoveries that problematized their basic claim that the world is number. And they protected their beliefs through secrecy and occasionally through violence. Like the proprietary algorithms of contemporary corporations, their work was reserved for a circle of adepts/owners. First among their secrets was the keen understanding that an unnamable point on the number line would represent a rupture in the relationship of mathematics and world. If that relationship failed, with it would go their basis for belief in a knowable world. Their claims arose from within the concrete practices of Greek mathematics. For example, the Greeks portrayed numbers by a series of dots called Monads. The complex ratios used to describe geometric figures were understood to generate the world, and numbers were visualized in arrangements of stones (calculi). A 2 x 2 arrangement of stones had the form of a square, hence the term “square numbers.” Thus, it was a foundational claim that any point or quantity (because monads were conceived as material objects) have a corresponding number. Line segments, circumferences, and all the rest had to correspond to what we still call the “rational numbers”: 1, 2, 3 . . . and their ratios. Thus, the Pythagorean’s great claim – that the world is number – was vulnerable to the discovery of a point on the number line that could not be named as the ratio of integers.
Unfortunately for their claim, such numbers are common, and the great irony of the Pythagorean Theorem lies in the fact that it routinely generates numbers that are not ratios of integers. For example, a right triangle with sides one unit long has a hypotenuse √2 units long (1² + 1² = C², i.e., 2 = C², i.e., C = √2). Numbers such as √2 contradict the mathematical aspiration toward a completely representational system because they cannot be expressed as a ratio of integers, and hence their status as what are called “ir-rational” numbers.[4] A relatively simple proof (one that forces the terms of any supposed ratio to be both odd and even) demonstrates that no ratio of integers can name √2; these numbers exist in what is called a “surd” relationship to the integers, that is, they are silent – the meaning of “surd” – about each other. They literally cannot “speak” to each other. To the Pythagoreans, this appeared as a discontinuity in their naming system, a gap that might be the mark of a world beyond the generative power of number. Such numbers are, in fact, a new order of naming precipitated by the limited representational power of the prior naming system based on the rational numbers. But for the Pythagoreans, to look upon these numbers was to look upon the void, to discover that the world had no intrinsic order. Irrational numbers disrupted the Pythagorean project of mathematizing reality. This deeply religious impulse toward order underlies the aspiration that motivates the bizarre and desperate terminologies of contemporary data fetishists: “data-driven,” “evidence-based,” and even “Big Data,” which is usually capitalized to show the reification of number it desires.
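For readers who want that step spelled out, the following is a minimal sketch of the standard reductio in modern notation (a restatement for convenience, not the Greeks’ own formulation):

```latex
\begin{align*}
&\text{Suppose } \sqrt{2} = \tfrac{p}{q}, \text{ with } p, q \text{ integers sharing no common factor.} \\
&\text{Then } p^{2} = 2q^{2}, \text{ so } p^{2} \text{ is even, and therefore } p \text{ is even: } p = 2r. \\
&\text{Substituting, } 4r^{2} = 2q^{2}, \text{ hence } q^{2} = 2r^{2}, \text{ so } q \text{ must be even as well.} \\
&\text{Both } p \text{ and } q \text{ are then even, contradicting the assumption of no common factor;} \\
&\text{hence no ratio of integers can equal } \sqrt{2}.
\end{align*}
```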
Big Data appeals to a mathematical nostalgia for certainty that cannot be sustained in contemporary culture. O’Neil provides careful examples of how history, social context, and the data chosen for algorithmic manipulation do not – indeed cannot – matter in this neo-Pythagorean world. Like Latour, she historicizes the practices and objects that the culture pretends are natural. The ideological and political nature of the input becomes invisible, especially when algorithms are granted special proprietary status that converts them to what Pasquale calls a “black box” (2016). It is a problematic claim, but it can be made without consequence because it speaks in the language of an ancient mathematical philosophy still heard in our culture,[5] especially in education where the multifoliate realities of art, music, and critical writing are quashed by forces such as the Core Curriculum and its pervasive valorization of standardization. Such strategies operate in fear of the inconsistency and incompleteness of any representational relationship, a fear of epistemological silence that has lurked in the background of Western mathematics from its beginnings. To the Greeks, the irrationals represented a sort of mathematical aphasia. The irrational numbers such as √2 thus obtained emblematic values far beyond their mathematical ones. They inserted an irremediable gap between the world and the “word” of mathematics. Such knowledge was catastrophic – adepts were murdered for revealing the incommensurability of side and diagonal.[6] More importantly, the discovery deeply fractured mathematics itself. The gap in the naming system split mathematics into algebra (numerical) and geometry (spatial), a division that persisted for almost 2,000 years. Little wonder that the Greeks restricted geometry to measurements that were not numerical, but rather were produced through the use of a straightedge and compass. Physical measurement by line segments and circles rather than by a numerical length effectively sidestepped the threat posed by the irrational numbers. Kline notes, “The conversion of all of mathematics except the theory of whole numbers into geometry . . . forced a sharp separation between number and geometry . . . at least until 1600” (1980, 105). Once we recognize that the Pythagorean theorem is a hypericon, i.e., a visual paradigm that visualizes theory, we begin to see its extension into other fundamental mathematical “discoveries” such as Descartes’s creation of coordinate geometry. A deep anxiety about the gap between word and world is manifested in both mathematics as well as in contemporary claims about “Big Data.”
The division between numerical algebra and spatial geometry remained a durable feature of Western mathematics until problematized by social change. Geometry offered an elegant axiomatic system that satisfied the hierarchical impulse of the culture, and it worked in concert with the Aristotelian logic that dominated notions of truth. The Aristotelian nous and the Euclidian axioms seemed similar in ways that justified the hierarchical structure of the church and of traditional politics. They were part of a social fabric that bespoke an extra-human order that could be dis-covered. But with the rise of commercial culture came the need for careful records, computations, risk assessments, interest calculations, and other algebraic operations. The tension between algebra and geometry became more acute and visible. It was in this new cultural setting that Descartes’s work appeared. Descartes’s 1637 publication of La Géométrie confronted the terrors revealed in the irrationals embodied in the geometry/algebra divide by subordinating both algebra and geometry to a more abstract relationship. Turchin notes that Descartes re-unified geometry and arithmetic not by granting either priority or reducing either to the other; rather, in his language “the symbols do not designate number or quantities, but relations of quantities” (Turchin 1977, 196).
Rotman directly links concepts of number to this shifting relationship of algebra and geometry and even to the status of numbers such as zero:
During the fourteenth century, with the emergence of mercantile / capitalism in Northern Italy, the handling of numbers passed . . . to merchants, artisan-scientists, architects . . . for whom arithmetic was an essential prerequisite for trade and technology . . . . The central role occupied by double-entry book-keeping (principle of the zero balance) and the calculational demands of capitalism broke down any remaining resistance to the ‘infidel symbol’ of zero. (1987, 7-8)
The emergence of the zero is an index to these changes, not the revelation of a pre-existing, extra-human reality. Similarly, Alexander’s history of the calculus places its development in the context of Protestant notions of authority (2014, 140-57). He emphasizes that the methodologies of the sciences and mathematics began to serve as political models for scientific societies: “if reasonable men of different backgrounds and convictions could meet to discuss the workings of nature, why could they not do the same in matters that concerned the state?” (2014, 249). Again, in the case of the calculus, mathematics responds to the emerging forces of the Renaissance: individualism, capitalism, and Protestantism. Certainly, the ongoing struggle with irrational numbers extends from the Greeks to the Renaissance, but the contexts are different. For the Greeks, the generative nature of number was central. For 17th Century Europe, the material demands of commercial life converged with religious, economic, and political shifts to make number a re-presentational tool.
The turmoil of that historical moment suggests the turmoil of our own era in the face of global warfare, climate change, over-population, and the litany of other catastrophes we perpetually await.[7] In both cases, the anxiety produces impulses to mathematize the world and thereby reveal a knowable “real.” The current corporate fantasy that the world is a simulation reflects the desire of non-mathematicians (Elon Musk and Sam Altman) to embed themselves in a techno-centric narrative about the power of their own kinds of tools to create reality itself. While this inexpensive version of Baudrillard’s work might seem sophomoric, it nevertheless exposes the impulse to contain the visceral fear that a socially constructed world is no different from solipsism’s chaos. It seems a version of the freshman student’s claim that “Everything’s just opinion” or the plot of another Matrix film. Such speakers act as though their construction of meaning were equal to any other: the old claim that Hitler and Mother Teresa are but two equally valid “opinions.” They do not know the term, or the concept, of social construction, and their radical notions of the individual prevent them from recognizing the vast scope, depth, and stabilizing power of social structures. They are only the most recent example of how social change exacerbates the misuse of mathematics.
Amid these sorts of epistemic shifts, Renaissance mathematics underwent its own transformations. Within a fifty-year span (1596-1646), Descartes, Newton, and Leibniz are born. Their major works appear, respectively, in 1637, 1666, and 1675, a burst of innovation that cannot be separated from the shifts in education, economics, religion, and politics that were then sweeping Europe. Porter notes that statistics emerges alongside the rising modern state of this era. Managing the state’s wealth required profiles of populations. Such mathematical profiling began in the mid-1600s, with the intent to describe the state’s wealth and human resources for the creation of “sound, well-informed state policy” (Porter 1986, 18). The notion of probabilities, samples, and models avoids the aspirations that shaped earlier mathematics by making mathematics purely descriptive. Five issues are commonly offered to explain the delayed appearance of probability: 1) an obsession with determinism and personal fatalism; 2) the belief that God spoke through randomization and thus a theory of the random was impious; 3) the lack of equiprobable events provided by standardized objects, e.g., dice; 4) the lack of economic drivers such as insurance and annuities; and 5) the lack of a workable calculus needed for the computation of probability distributions (Davis and Hersh 1981, 21). Hacking finds these insufficient and suggests instead that as authority was relocated in nature rather than in the words of authorities, attention turned to the observation of frequencies.[8] Alongside the fierce opposition of the Church to the zero, understood as the absence of God, and to the calculus, understood as an abandonment of material number, the shifting mathematical landscape signals the changes that began to affect the longstanding status of number as a sort of prelapsarian language.
Mathematics was losing its claims to completeness and consistency, and the incommensurables problematized that. Newton and Leibniz “de-problematized” irrationals, and opened mathematics to a new notion of approximation. The central claims about mathematics were not disproved; worse, they were set aside as unproductive conflations of differences between the continuous and the discrete. But because the church saw mathematics as “true” in a fashion inextricable from other notions of the truth, it held a special status. Calculus became a dangerous interest likely to call the Inquisition to action. Alexander locates the central issue as the irremediable conflict between the continuous and the discrete, something that had been the core of Zeno’s paradoxes (2014). The line of mathematical anxieties stretches from the Greeks into the 17th Century. These foundational understandings seem remote and abstract until we see how they re-appear in the current claims about the importance of “Big Data.” The term legitimates its claims by resonating with other responses to the anxiety of representation.
The nature of the hypericon perpetuates the notion of a stable, knowable reality that rests upon a non-human order. In this view, mathematics is independent of the world. It existed prior to the world and does not depend on the world; it is not an emergent narrative. The mathematician discovers what is already there. While this viewpoint sees mathematics as useful, mathematics is prior to any of its applications and independent of them. The parallel to religious belief becomes obvious if we substitute the term “God” for “mathematics”; the notions of a self-existing, self-knowing, and self-justifying system are equally applicable (Davis and Hersh 1981, 232-3). Mathematics and religion share in a fundamental Western belief in the Ideal. Taken together, they reveal a tension between the material and the eternal that can be mediated by specific languages. There is no doubt that a simplified mathematics serves us when we are faced with practical problems such as staking out a rectangular foundation for a house, but beyond such short-term uses lie more consequential issues, e.g., the relation of the continuous and the discrete, and between notions of the Ideal and the socially constructed. These larger paradoxes remain hidden when assertions of completeness, consistency, and certainty go unchallenged. In one sense, the data fetishists are simply the latest incarnation of a persistent problem: understanding mathematics as culturally situated.
Again, historicizing this problem addresses the widespread willingness to accept their totalistic claims. And historicizing these claims requires a turn to established critical techniques. For example, Rotman’s history of the zero turns to Derrida’s Of Grammatology to understand the forces that complicated and paralyzed the acceptance of zero into Western mathematics (1987). He turns to semiotics and to the work of Ricoeur to frame his reading of the emergence of the zero in the West during the Renaissance. Rotman, Alexander, desRaines, and a host of mathematical historians recognize that the nature of mathematical authority has evolved. The evolution lurks in the role of the irrational numbers, in the partial claims of statistics, and in the approximations of the calculus. The various responses are important as evidence of an anxiety about the limits of representation. The desire to resolve such arguments seems revelatory. All share an interest in the gap between the aspirations of systematic language and its object: the unnamable. That gap is iconic, an emblem of its limits and the functions it plays in the generation of novel responses to the threat of an inarticulable void; its history exposes the powerful attraction of the claims made for Big Data.
By the late 1800s, questions of systematic completeness and consistency grew urgent. For example, they appeared in the competing positions of Frege and Hilbert, and they resonated in the direction David Hilbert gave to 20th Century mathematics with his famed 23 questions (Blanchette 2014). The second of these specifically asked for a proof that the axioms of arithmetic are consistent. This question deeply influenced figures such as Bertrand Russell, Ludwig Wittgenstein, and others.[9] Hilbert’s question was answered in 1931 by Gödel’s theorems, which demonstrated the inherent incompleteness of arithmetic systems and the impossibility of proving their consistency from within. Gödel’s first theorem demonstrated that any consistent axiomatic system rich enough to express arithmetic would necessarily contain true statements that could be neither proven nor disproven within it; his second theorem demonstrated that such a system cannot prove its own consistency. While mathematicians often take care to note that his work addresses a purely mathematical problem, it nevertheless is read metaphorically. As a metaphor, it connects the problematic relationship of natural and mathematical languages. This seems inevitable because it led to the collapse of the mathematical aspiration for a wholly formal language that does not require what is termed ‘natural’ language, that is, for a system that did not have to reach outside of itself. Just as John Craig’s work exemplifies the epistemological anxieties of the late eighteenth century,[10] so also does Gödel’s work identify a sustained attempt of his own era to demonstrate that systematic languages might be without gaps.
Gödel’s theorems rely on a system that creates specialized numbers for symbols and the operations that relate them. This second-order numbering enabled him to move back and forth between the logic of statements and the codes by which they were represented. His theorems respond to an enduring general hope for complete and consistent mappings of the world with words, and each embeds a representational failure. Craig was interested in the loss of belief in the gospels; Pythagoras feared the gaps in the number line represented by the irrational numbers, and Gödel identified the incompleteness of axiomatic systems and the limits on proving their consistency. To the dominant mathematics of the early 20th Century, the value of the question to which Gödel addresses himself lies in the belief that an internally complete mathematical map would be the mark of either of two positions: 1) the purely syntactic orderliness of mathematics, one that need not refer to any experiential world (this is the position of Frege, Russell, and Hilbert); or 2) the emergence of mathematics alongside concrete, human experience. Goldstein argues that these two dominant alternatives of the late nineteenth and early twentieth centuries did not consider the aprioricity of mathematics to constitute an important question, but Gödel offered his theorems as proofs that served exactly that idea. His demonstration of incompleteness does not signal a disorderly cosmos; rather, it argues that there are arithmetic truths that lie outside of formalized systems; as Goldstein notes, “the criteria for semantic truth could be separated from the criteria for provability” (2006, 51). This was an argument for mathematical Platonism. Goldstein’s careful discussion of the cultural framework and the meta-mathematical significance of Gödel’s work emphasizes that it did not argue for the absence of any extrinsic order to the world (51). Rather, Gödel was consciously demonstrating the defects in a mathematical project begun by Frege, addressed in the work of Russell and Whitehead, and enshrined by Hilbert as essential for converting mathematics into a profoundly isolated system whose orderliness lay in its internal consistency and completeness.[11] Similarly, his work also directly addressed questions about the a priori nature of mathematics challenged by the Vienna Circle. Paradoxically, by demonstrating that a foundational system – arithmetic – could not be shown to be both consistent and complete, the argument that mathematics was simply a closed, self-referential system could be challenged and opened to meta-mathematical claims about epistemological problems.
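To make the “specialized numbers” mentioned above a little more concrete, here is a minimal, purely illustrative sketch of the general idea of Gödel numbering. The symbol table and helper functions are invented for this example; Gödel’s actual coding scheme differs in its details, but the principle is the same: a formula becomes a single integer, so that statements about numbers can also be read as statements about (encodings of) statements.

```python
# Toy illustration of Godel-style numbering: each symbol gets a small code,
# and a formula (a string of symbols) is packed into a single integer as
# 2^c1 * 3^c2 * 5^c3 * ..., using successive primes as "positions."
# The symbol table below is hypothetical, chosen only for this sketch.

SYMBOL_CODES = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a sequence of symbols as one integer via prime exponents."""
    codes = [SYMBOL_CODES[ch] for ch in formula]
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number):
    """Recover the symbol sequence by reading off the prime exponents."""
    reverse = {v: k for k, v in SYMBOL_CODES.items()}
    symbols = []
    for p in primes(64):          # more positions than this toy example needs
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        if exponent == 0:
            break
        symbols.append(reverse[exponent])
    return "".join(symbols)

if __name__ == "__main__":
    n = godel_number("0=0")
    print(n)            # a single integer standing in for the formula
    print(decode(n))    # "0=0" -- the statement recovered from its code
```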
Gödel’s work, among other things, argues for essential differences between human thought and mathematics. Gödel’s work has become imbricated in a variety of discourses about representation, the nature of the mind, and the nature of language. Goldstein notes:
The structure of Gödel’s proof, the use it makes of ancient paradox [the liar’s paradox], speaks at some level, if only metaphorically, to the paradoxes in the tale that the twentieth century told itself about some of its greatest intellectual achievements – including, of course, Gödel’s incompleteness theorems. Perhaps someday a historian of ideas will explain the subjectivist turn taken by so many of the last century’s most influential thinkers, including not only philosophers but hard-core scientists, such as Heisenberg and Bohr. (2006, 51)
At the least, his work participated in a major consideration of three alternative understandings of symbolic systems: as isolated, internally ordered syntactic systems, as accompaniments of experience in the material world, or as the a priori realities of the Ideal. Whatever the immensely complex issues of these various positions, Gödel is the key meta-mathematician/logician whose work describes the limits of mathematical representation through an elegant demonstration that arithmetic systems – axiomatic systems – were inevitably incomplete and incapable of certifying their own consistency. Depending on one’s aspirations for language, this is either a great catastrophe or an opening to an infinite world of possibility where the goal is to deploy a paradoxical stance that combines the assertion of meaning with its cancellation. This double position addresses the problem of representational completeness.
This anxiety became acute during the first half of the twentieth century as various discourses deployed strategies that exploited this heightened awareness of the intrinsic incompleteness and inconsistency of systematic knowledge. Whatever their disciplinary differences – neurology, psychology, mathematics – they nonetheless shared the sense that recognizing these limits was an opportunity to understand discourse both from within narrow disciplinary practices and from without in a larger logical and philosophical framework that made the aspiration toward completeness quaint, naïve, and unproductive. They situated the mind as a sort of boundary phenomenon between the deployment of discourses and an extra-linguistic reality. In contrast to the totalistic claims of corporate spokesmen and various predictive software, this sensibility was a recognition that language might always fail to re-present its objects, but that those objects were nonetheless real and expressible as a function of the naming process viewed from yet another position. An important corollary was that these gaps were not only a token for the interplay of word and world, but were also an opportunity to illuminate the gap itself. In short, symbol systems seemed to stand as a different order of phenomena than whatever they proposed to represent, and the result was a burst of innovative work across a variety of disciplines.
Data enthusiasts sometimes participate in a discredited mathematics, but they do so in powerfully nostalgic ways that resonate with the amorphous Idealism infused in our hierarchical churches, political structures, aesthetics, and epistemologies. Thus, Big Data enthusiasts speak through the residue of a powerful historical framework to assert their own credibility. For these New Pythagoreans, mathematics remains a quasi-religious undertaking whose complexity, consistency, sign systems, and completeness assert a stable, non-human order that keeps chaos at bay. However, they are stepping into an issue more fraught than simply the misuses and misunderstanding of the Pythagorean Theorem. The historicized view of mathematics and the popular invocation of mathematics diverge at the point that anxieties about the representational failure of languages become visible. We not only need to historicize our understanding of mathematics, but also to identify how popular and commercial versions of mathematics are nostalgic fetishes for certainty, completeness, and consistency. Thus, the authority of algorithms has less to do with their predictive power than with their connection to a tradition rooted in the religious frameworks of Pythagoreanism. Critical methods familiar to the humanities – semiotics, deconstruction, psychology – build a sort of critical braid that not only re-frames mathematical inquiry, but places larger questions about the limits of human knowledge directly before us; this braid forces an epistemological modesty that is eventually ethical and anti-authoritarian in ways that the New Pythagoreans rarely are.
Immodest claims are the hallmark of digital fetishism, and are often unabashedly conscious. Chris Anderson, while Editor-in-Chief of Wired magazine, infamously argued that “the data deluge makes the scientific method obsolete” (2008). He claimed that distributed computing, cloud storage, and huge sets of data made traditional science outmoded. He asserted that science would become mathematics, a mathematical sorting of data to discover new relationships:
At the petabyte scale, information is not a matter of simple three and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later.
“Agnostic statistics” would be the mechanism for precipitating new findings. He suggests that mathematics is somehow detached from its contexts and represents the real through its uncontaminated formal structures. In Anderson’s essay, the world is number. This neo-Pythagorean claim quickly gained attention, and then wilted in the face of scholarly responses such as that of Pigliucci (2009, 534).
Anderson’s claim was both a symptom and a reinforcement of traditional notions of mathematics that extend far back into Western history. Its explicit notions of mathematics stirred two kinds of anxiety: one reflected a fear of a collapsed social project (science) and the other reflected a desperate hunger for a language – mathematics – that penetrated the veil drawn across reality and made the world knowable. Whatever the collapse of his claim, similar ones such as those of the facial phrenologists continue to appear. Without history – mathematical, political, ideological – “data” acquires a material status much as number did for the Greeks, and this status enables statements of equality between the messiness of reality and the neatness of formal systems. Part of this confusion is a common misunderstanding of the equals sign in popular culture. The “sign” is a relational function, much as the semiotician’s signified and signifier combine to form a “sign.” However, when we mistakenly treat the “equals sign” as a directional, productive operation, the nature of mathematics loses its availability to critique. It becomes a process outside of time that generates answers by re-presenting the real in a language. Where once a skeptical Pythagorean might be drowned for revealing the incommensurability of side and diagonal, proprietary secrecy now threatens a sort of legalized financial death for those who violate copyright (Pasquale 2016, 142). Pasquale identifies the “creation of invisible powers” as a hallmark of contemporary, algorithmic culture (2016, 193). His invaluable work recovers the fact that algorithms operate in a network of economic, political, and ideological frameworks, and he carefully argues the role of legal processes in resisting the control that algorithms can impose on citizens.
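The distinction between the relational and the directional reading of the equals sign can be made concrete in code. The following is a minimal, hypothetical sketch (the names and numbers are invented): the mathematician’s “=” asserts a relation that is simply true or false, while the programmer’s assignment produces a new state.

```python
# A minimal sketch contrasting two readings of the "equals sign".
# (Hypothetical illustration; not drawn from the essay's sources.)

# Relational reading: "==" asserts a symmetric relation that is simply
# true or false; nothing is produced or changed by the assertion.
relational_claim = (2 + 2 == 4)    # True: a claim about an existing relation

# Directional, productive reading: "=" takes a right-hand computation and
# deposits its result into a left-hand name, creating something new.
prediction = 0.87 * 1000           # the sign here makes a value exist

# Mistaking the second for the first treats a constructed output as a
# discovered truth about the world.
print(relational_claim, prediction)
```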
Pasquale’s language is not mathematical, but it shares with scholars like Rotman and Goldstein an emphasis on historical and cultural context. The algorithm is made accountable if we think of it as an act whose performance instantiates digital identities through powerful economic, political, and ideological narratives. The digitized individual does not exist until it becomes the subject of such a performance, a performance which is framed much as any other performance is framed: by the social context, by repetition, and through embodiment. Digital individuals come into being when the algorithmic act is performed, but they are digital performances because of the irremediable gap between any object and its re-presentation. In short, they are socially constructed. This would be of little import except that these digital identities begin as proxies for real bodies, but the diagnoses and treatments are imposed on real, social, psychological, flesh beings. The difference between digital identity and human identity can be ignored if the mathematized self is isomorphic with the human self. Thus, algorithmic acts entangle the input > algorithm > output sequence by concealing layers of problematic differences: digital self and human self; mathematics and the Real; test inputs and test outputs; scaling; and input and output. The sequence loses its tidy sequential structure when we recognize that the outputs are themselves data and often re-enter the algorithm’s computations by their transfer to third parties whose information returns for re-processing. A somewhat better version of the flow would be data1 > algorithm > output > data2 > algorithm > output > data3 . . . with the understanding that any datum might re-enter the process. The sequence suggests how an object is both the subject of its context and a contributor to that context. The threat of a constricting output looms precisely because there is decreasing room for what de Certeau calls “la perruque” (1988, 25), i.e., the inefficiencies where unplanned innovation appears. And like any text, the algorithmic act requires a variety of analytic strategies.
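The revised flow can be sketched in a few lines of hypothetical code (the scoring function and values are invented): each output re-enters the domain, so later outputs are partly products of earlier ones.

```python
# Hypothetical sketch of outputs re-entering the algorithm as new data.

def algorithm(data):
    """A stand-in scoring step: any function from a data set to an output."""
    return sum(data) / len(data)

data = [0.4, 0.6, 0.5]        # data1: the initially collected inputs
history = []

for step in range(3):
    output = algorithm(data)  # the algorithm acts on the current domain
    history.append(output)
    # The output is itself a datum: third parties return it for
    # re-processing, so data2, data3, ... incorporate earlier outputs.
    data = data + [output]

print(history)  # each later value is partly a product of earlier outputs
```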
We have learned to think of algorithms in directional terms. We understand them as transformative processes that operate upon data sets to create outputs. The problematic relationships of data > algorithm > output become even more visible when we recognize that data sets have already been collected according to categories and processes that embody political, economic, and ideological biases. The ideological origin of the collected data – the biases of the questions posed in order to generate “inputs” – is yet another kind of black box, a box prior to the black box of the algorithm, a prior structure inseparable from the algorithm’s hunger for (using the mathematicians’ language) a domain upon which it can act to produce a range of results. The nature of the algorithm controls which items from the domain (data set) can be used; conversely, the nature of the data set controls what the algorithm has available to act upon and transform into descriptive and prescriptive claims. The inputs are as much a black box as the algorithm itself. Thus, opaque algorithms operate upon opaque data sets (Pasquale 2016, 204) in ways that nonetheless embody the inescapable “politics of large numbers” that is the topic of Desrosières and Naish’s history of statistical reasoning (2002). This interplay forces us to recognize that the algorithm inherits biases, and that these biases are then compounded by operations within these two algorithmic boxes to become doubly biased outputs. It might be more revelatory to describe the algorithmic process as “stimuli” > algorithm > “responses.” Re-naming “input” as “stimuli” emphasizes the selection process that precedes the algorithmic act; re-naming “output” as “response” establishes the entire process as human, cultural, and situated. This is familiar territory to psychology. Digital technologies are texts whose complexity emerges when approached using established tools for textual analysis. Rotman and other mathematicians directly state their use of semiotics. They turn to phenomenology to explicate the reader/writer interaction, and they approach mathematical texts with terms like narrator, self-reference, and recursion. Most of all, they explore the problem of mathematical representation when mathematics itself is complicated by its referential, formal, and psychological statuses.
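A similarly hypothetical sketch of the double black box: a collection step decides which observations ever become “stimuli,” and the algorithm’s “responses” inherit that prior filtering. The neighborhoods and tallies below are invented for illustration.

```python
# Hypothetical sketch: the bias of collection precedes the bias of computation.

observations = [
    {"neighborhood": "A", "incidents": 2},
    {"neighborhood": "B", "incidents": 2},
    {"neighborhood": "C", "incidents": 2},
]

def collect(all_observations):
    """First black box: only some observations ever become 'stimuli'."""
    # Neighborhood C is simply never measured.
    return [o for o in all_observations if o["neighborhood"] != "C"]

def algorithm(data):
    """Second black box: a simple tally presented as a neutral ranking."""
    return sorted(data, key=lambda o: o["incidents"], reverse=True)

responses = algorithm(collect(observations))
print(responses)  # neighborhood C vanishes from the "findings" entirely
```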
The fetishization of mathematics is a fundamental strategy for exempting digital technologies from theory, history, and critique. Two responses are essential: first, to clarify the nostalgic mathematics at work in the mathematical rhetoric of Big Data and its tools; and second, to offer analogies that step beyond naïve notions of re-presentation to more productive critiques. Analogy is essential because analogy is itself a performance of the anti-representational claim that digital technologies need to be understood as socially constructed by the same forces that instantiate any technology. Bruno Latour frames the problem of the critical stance as three-dimensional:
The critics have developed three distinct approaches to talking about our world: naturalization, socialization and deconstruction . . . . When the first speaks of naturalized phenomena, then societies, subjects, and all forms of discourse vanish. When the second speaks of fields of power, then science, technology, texts, and the contents of activities disappear. When the third speaks of truth effects, then to believe in the real existence of brain neurons or power plays would betray enormous naiveté. Each of these forms of criticism is powerful in itself but impossible to combine with the other. . . . Our intellectual life remains recognizable as long as epistemologists, sociologists, and deconstructionists remain at arm’s length, the critique of each group feeding on the weaknesses of the other two. (1993, 5-6)
Latour then asks, “Is it our fault if the networks are simultaneously real, like nature, narrated, like discourse, and collective like society?” (6). He goes on to assert, “Analytic continuity has become impossible” (7). Similarly, Rotman’s history of the zero finds that the concept problematizes the hope that a “field of entities” exists prior to “the meta-sign which both initiates the signifying system and participates within it as a constituent sign”; he continues, “the simple picture of an independent reality of objects providing a pre-existing field of referents for signs conceived after them . . . cannot be sustained” (1987, 27). Our own approach is heterogeneous; we use notions of fetish, re-presentation, and Gödelian metaphor to bypass the critical immunity that naturalistic mathematical claims confer on digital technologies.
Whether we use Latour’s description of the mutually exclusive methods of talking about the world – naturalization, socialization, deconstruction – or Rotman’s three starting points for the semiotic analysis of mathematical signs – referential, formal, and psychological – we can contextualize the claims of the Big Data fetishists so that the manifestations of Big Data thinking – policing practices, financial privilege, educational opportunity – are not misrepresented as merely mathematical/statistical questions about assessing the results of supposedly neutral interventions, decisions, or judgments. If we are confined to those questions, we will only operate within the referential domains described by Rotman or the realm of naturalization described by Latour. Claims of a-contextual validity ignore their own situated status by asserting that operations, uses, and conclusions are exempt from the aggregated array of partial theorizations applied, in this case, to mathematics. This historical/critical application reveals the contradictory world concealed and perpetuated by the corporatized mathematics of contemporary digital culture. However, deploying a constellation of critical methods – historical, semiotic, psychological – prevents the critique from falling prey to the totalism that afflicts the thinking of these New Pythagoreans. This array includes concepts such as fetishization from the pre-digital world of psychoanalysis.
The concept of the fetish has fallen on hard times as the star of psychoanalysis sinks into the West’s neurochemical sea. But its original formulation remains useful because it seeks to address the gap between representational formulas and their objects. For example – drawing on the quintessential heterosexual, male figure who is central to psychoanalysis – the male shoe fetishist makes no distinction between a pair of Louboutins and the “normal” object of his sexual desire. Fenichel asserts (1945, 343) that such fetishization is “an attempt to deny a truth known simultaneously by another part of the personality,” and enables the use of denial. Such explanations may seem quaint, but that is not the point. The point is that within one of the most powerful metanarratives of the past century – psychoanalysis – scientists faced the contorted and defective nature of human symbolic behavior in its approach to a putative “real.” The fetish offers an illusory real that protects the fetishist against the complexities of the real. Similarly, the New Pythagoreans of Big Data offer an illusory real – a misconstrued mathematics – that often paralyzes resistance to their profit-driven, totalistic claims. In both cases, the fetish becomes the “real” while simultaneously protecting the fetishist from contact with whatever might be more human and more complex.
Wired Magazine’s “daily fetish” seems an ironic reversal of the term’s functional meaning. Its steady stream of technological gadgets has an absent referent, a hyperreal as Baudrillard styles it, that is exactly the opposite of the material “real” that psychoanalysis sees as the motivation of the fetish. In lived life, the anxiety is provoked by the real; in digital fetishism, the anxiety is provoked by the absence of the real. The anxiety of absence provokes the frenzied production of digital fetishes. Their inevitable failure – because representation always fails – drives the proliferation of new, replacement fetishes, and these become a networked constellation that forms a sort of simulacrum: a model of an absence that the model paradoxically attempts to fill. Each failure accentuates the gap, thereby accentuating the drive toward yet another digital embodiment of the missing part. Industry newsletters exemplify the frantic repetition required by this worldview. For example, Edsurge proudly reports an endless stream of digital edtech products, each substituting for the awkward, fleshly messiness of learning. And each substitution claims to validate itself via mathematical claims of representation. And almost all fade away as the next technology takes its place. Endless succession.
This profusion of products clamoring to be the “real” object suggests a sort of cultural castration anxiety, a term that might prove less outmoded if we note the preponderance of males in the field who busily give birth to objects with the characteristics of the living beings they seek to replace.[12] The absence at the core of this process is the unbridgeable gap between word and world. Mathematics is especially useful to such strategies because it is embedded in the culture as both the discoverer and validator of objective true/false judgments. These statements are understood to demonstrate a reality that “exists prior to the mathematical act of investigating it” (Rotman 2000, 6). It provides the certainty, the “real” that the digital fetish simultaneously craves and fears. Mathematics short-circuits the problematic question that drives the anxiety about a knowable “real.” The point here is not to revive psychoanalytic thinking, but rather to see how an anxiety mutates and invites the application of critical traditions that themselves embody a response to the incompleteness and inconsistency of sign systems. The psychological model expands into the destabilized social world of digital culture.
The notion of mathematics as a complete and consistent equivalent of the real is a longstanding feature of Western thought. It both creates and is created by the human need for a knowable real. Mathematics reassures the culture because its formal characteristics seem to operate without referents in the real world, and thus its language seems to become more real than any iteration of its formal processes. However, within mathematical history, the story is more convoluted, in part because of the immense practical value of applied mathematics. While semiotic approaches to the history engage and describe the social construction of mathematics, an important question remains about the completeness and consistency of mathematical systems. The history of this concern connects both the technical question and the popular interest in the power of languages – natural and/or mathematical – to represent the real. Again, these are not just technical, expert questions; they leak into popular metaphor because they embody a larger cultural anxiety about a knowable real. If Pythagorean notions have affected the culture for 2500 years, we want to claim that contemporary culture embodies the anxiety of uncertainty that is revealed not only in its mathematics, but also in the contemporary arguments about algorithmic bias, completeness, and consistency.
The nostalgia for a fully re-presentational sign system becomes paired with the digital technologies – software, hardware, networks, query strategies, algorithms, black boxes – that characterize daily life. However, this nostalgic rhetoric has a naïveté that embodies the craving for a stable and knowable external world. The culture often responds to this craving through objects inscribed with the certainty imputed to mathematics, and thus these digital technologies seem to satisfy a deeply felt need. The problematic nature of mathematics matters little in terms of personalized shopping choices or customizing the ideal playlist. Although these systems rarely achieve the goal of “knowing what you want before you want it,” we seldom balk at the claim because the stakes are so low. However, where these claims have life-altering, and in some cases life-and-death, implications – education, policing, health care, credit, safety net benefits, parole, drone targets – we need to understand them so they can be challenged, and where needed, resisted. Resistance addresses two issues:
That the traditional mystery and power of number seem to justify the refusal of transparency. The mystified tools point upward to the supposed mysterium of the mathematical realm.
That the genuflection before the mathematical mysterium has an insatiable hunger for illustrations that show the world is orderly and knowable.
Together, these two positions combine to assert the mythological status of mathematics, and set it in opposition to critique. However, it is vulnerable on several fronts. As Pasquale makes clear, legislation – language in action – can begin the demystification; proprietary claims are mundane imitations of the old Pythagorean illusions; outside of political pressure and legislation, there is little incentive for companies to open their algorithms to auditing. However, once pried open by legislation, the wizard behind the curtain and the Automated Turk show their hand. With transparency comes another opportunity: demythologizing technologies that fetishize the re-presentational nature of mathematics.
_____
Chris Gilliard’s scholarship concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students.
Hugh Culik teaches at Macomb Community College. His work examines the convergence of systematic languages (mathematics and neurology) in Samuel Beckett’s fiction.
[1] Rotman’s work, along with Amir Alexander’s cultural history of the calculus (2014) and Rebecca Goldstein’s (2006) placement of Gödel’s theorems in the historical context of mathematics’ conceptual struggle with the consistency and completeness of systems, exemplifies the movement to historicize mathematics. Alexander and Rotman are mathematicians, and Goldstein is a logician.
[2] Other mathematical concepts have hypericonic status. For example, triangulation serves psychology as a metaphor for a family structure that pits two members against a third. Politicians “triangulate” their “position” relative to competing viewpoints. But because triangulation works in only two dimensions, it produces gross oversimplifications in other contexts. Nora Culik (pers. comm.) notes that a better metaphor would be multilateration, which uses the difference between a signal’s arrival times at two or more known points to generate the possible locations of an unknown point; those locations take the shape of a hyperboloid, a metaphor that allows for uncertainty in understanding multiply determined concepts. Both re-present an object’s position, but each carries implicit ideas of space.
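As a rough sketch of the geometry invoked here (with invented coordinates and timing), each measured difference in arrival times confines the unknown point to the set of locations whose distances to two known receivers differ by a fixed amount – one branch of a hyperbola in the plane, a hyperboloid in space:

```python
# Hypothetical sketch: the hyperbolic locus behind multilateration.
import math

# Two receivers at known positions; the signal speed and the measured
# time-difference-of-arrival (TDOA) are invented for illustration.
r1, r2 = (0.0, 0.0), (10.0, 0.0)
speed = 1.0
tdoa = 3.0                     # the signal reaches r2 this much earlier than r1
delta = speed * tdoa           # fixed difference in distances to the two receivers

def on_locus(p, tol=0.01):
    """True if d(p, r1) - d(p, r2) approximately equals the measured delta."""
    return abs((math.dist(p, r1) - math.dist(p, r2)) - delta) < tol

# Sample candidate points on a vertical line; those satisfying the condition
# lie on one branch of a hyperbola. A second receiver pair contributes a
# second hyperbola, and the unknown point sits at their intersection.
candidates = [(8.0, y / 100.0) for y in range(-1000, 1001)]
print([p for p in candidates if on_locus(p)])
```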
[3] Faith in the representational power of mathematics is central to hedge funds. Bridgewater Associates, a fund that manages more than $150 billion US, is at work building a piece of software to automate the staffing for strategic planning. The software seeks to model the cognitive structure of founder Raymond Dalio, and is meant to perpetuate his mind beyond his death. Dalio variously refers to the project as “The Book of the Future,” “The One Thing,” and “The Principles Operating System.” The project has drawn the enthusiastic attention of many popular publications such as The Wall Street Journal, Forbes, Wired, Bloomberg, and Fortune. The project’s model seems to operate on two levels: first, as a representation of Dalio’s mind, and second, as a representation of the dynamics of investing.
[4] Numbers are divided into categories that grow in complexity. The development of numbers is an index to the development of the field (Kline, Mathematical Thought, 1972). For a careful study of the problematic status of zero, see Brian Rotman, Signifying Nothing: The Semiotics of Zero (1987). Amir Aczel, Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers (2015) offers a narrative of the historical origins of number.
[5] Eugene Wigner (1959) asserts an ambiguous claim for a mathematizable universe. Responses include Max Tegmark’s “The Mathematical Universe” (2008) which sees the question as imbricated in a variety of computational, mathematical, and physical systems.
[6] The anxiety of representation characterizes the shift from the literary moderns to the postmodern. For example, Samuel Beckett’s intense interest in mathematics and his strategies – literalization and cancellation – typify the literary responses to this anxiety. In his first published novel, Murphy (1938), one character mentions “Hypasos the Akousmatic, drowned in a mud puddle . . . for having divulged the incommensurability of side and diagonal” (46). Beckett uses detailed references to Descartes, Geulincx, Gödel, and 17th Century mathematicians such as John Craig to literalize the representational limits of formal systems of knowledge. Andrew Gibson’s Beckett and Badiou (2006) provides a nuanced assessment of the mathematics, literature, and culture in Beckett’s work.
[7] See Frank Kermode, The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue (2000) for an overview of the apocalyptic tradition in Western culture and the totalistic responses it evokes in politics. While mathematics dealt with indeterminacy, incompleteness, inconsistency and failure, the political world simultaneously saw a countervailing regressive collapse: Mein Kampf in 1925, Hitler’s appointment as Chancellor of Germany in 1933, and the Soviet Gulag in 1934; the fascist bent of Ezra Pound, T. S. Eliot’s After Strange Gods, and D. H. Lawrence’s Mexican fantasies suggest the anxiety of re-presentation that gripped the culture.
[8] Davis and Hersh (21) divide probability theory into three aspects: 1) theory, which has the same status as any other branch of mathematics; 2) applied theory that is connected to experimentation’s descriptive goals; and 3) applied probability for practical decisions and actions.
[9] For primary documents, see Jean Van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (1967). Ernest Nagel and James Newman, Gödel’s Proof (1958) explains the steps of Gödel’s proofs and carefully restricts their metaphoric meanings; Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid [A Metaphoric Fugue on Minds and Machines in the Spirit of Lewis Carroll] (1979) places the work in the conceptual history that now leads to the possibility of artificial intelligence.
[10] See Richard Nash, John Craige’s Mathematical Principles of Christian Theology (1991) for a discussion of the 17th Century mathematician and theologian who attempted to calculate the rate of decline of faith in the Gospels so that he would know the date of the Apocalypse. His contributions to calculus and statistics emerge in a context we find absurd, even if his friend, Isaac Newton, found them valuable.
[11] An equally foundational problem – the mathematics of infinity – occupies a similar position to the questions addressed by Gödel. Cantor’s opening of set theory both exposes the problems that infinity poses to formal mathematics and offers the means to address them.
[12] For the historical appearances of the masculine version of this anxiety, see Dennis Todd’s Imagining Monsters: Miscreations of the Self in Eighteenth Century England (1995).
_____
Works Cited
Aczel, Amir. 2015. Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers. New York: St. Martin’s Griffin.
Alexander, Amir. 2014. Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. New York: Macmillan.
Anderson, Chris. 2008. “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired 16, no. 7.
Beckett, Samuel. 1957. Murphy (1938). New York: Grove.
Berk, Richard. 2011. “Q&A with Richard Berk.” Interview by Greg Johnson. PennCurrent (Dec 15).
Blanchette, Patricia. 2014. “The Frege-Hilbert Controversy.” In Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy.
boyd, danah, and Kate Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15, no. 5. doi:10.1080/1369118X.2012.678878.
de Certeau, Michel. 1988. The Practice of Everyday Life. Translated by Steven Rendall. Berkeley: University of California Press.
Davis, Philip and Reuben Hersh. 1981. Descartes’ Dream: The World According to Mathematics. Boston: Houghton Mifflin.
Desrosières, Alain, and Camille Naish. 2002. The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge: Harvard University Press.
Eagle, Christopher. 2007. “‘Thou Serpent That Name Best’: On Adamic Language and Obscurity in Paradise Lost.” Milton Quarterly 41, no. 3: 183-194.
Fenichel, Otto. 1945. The Psychoanalytic Theory of Neurosis. New York: W. W. Norton & Company.
Gibson, Andrew. 2006. Beckett and Badiou: The Pathos of Intermittency. New York: Oxford University Press.
Goldstein, Rebecca. 2006. Incompleteness: The Proof and Paradox of Kurt Gödel. New York: W.W. Norton & Company.
Guthrie, William Keith Chambers. 1962. A History of Greek Philosophy: Vol. 1, The Earlier Presocratics and the Pythagoreans. Cambridge: Cambridge University Press.
Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid; [a Metaphoric Fugue on Minds and Machines in the Spirit of Lewis Carroll]. New York: Basic Books.
Kermode, Frank. 2000. The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue. New York: Oxford University Press.
Kline, Morris. 1990. Mathematics: The Loss of Certainty. New York: Oxford University Press.
Latour, Bruno. 1993. We Have Never Been Modern. Translated by Catherine Porter. Cambridge: Harvard University Press.
Mitchell, W. J. T. 1995. Picture Theory: Essays on Verbal and Visual Representation. Chicago: University of Chicago Press.
Nagel, Ernest and James Newman. 1958. Gödel’s Proof. New York: New York University Press.
Office of Educational Technology at the US Department of Education. 2012. “Jose Ferreira: Knewton – Education Datapalooza.” YouTube video, 9:47. Posted November 2012. https://youtube.com/watch?v=Lr7Z7ysDluQ.
O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown.
Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge: Harvard University Press.
Pigliucci, Massimo. 2009. “The End of Theory in Science?” EMBO Reports 10, no. 6: 534.
Porter, Theodore. 1986. The Rise of Statistical Thinking, 1820-1900. Princeton: Princeton University Press.
Rotman, Brian. 1987. Signifying Nothing: The Semiotics of Zero. Stanford: Stanford University Press.
Rotman, Brian. 2000. Mathematics as Sign: Writing, Imagining, Counting. Stanford: Stanford University Press.
Tegmark, Max. 2008. “The Mathematical Universe.” Foundations of Physics 38 no. 2: 101-150.
Todd, Dennis. 1995. Imagining Monsters: Miscreations of the Self in Eighteenth Century England. Chicago: University of Chicago Press.
Turchin, Valentin. 1977. The Phenomenon of Science. New York: Columbia University Press.
Van Heijenoort, Jean. 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Vol. 9. Cambridge: Harvard University Press.
Wigner, Eugene P. 1959. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11. Reprinted in Communications on Pure and Applied Mathematics 13, no. 1 (1960): 1-14.
Wu, Xiaolin, and Xi Zhang. 2016. “Automated Inference on Criminality using Face Images.” arXiv preprint arXiv:1611.04135.
Is there, was there, will there be, a digital turn? In (cultural, textual, media, critical, all) scholarship, in life, in society, in politics, everywhere? What would its principles be?
The short prompt I offered to the contributors to this special issue did not presume to know the answers to these questions.
That means, I hope, that these essays join a growing body of scholarship and critical writing (much, though not by any means all, of it discussed in the essays that make up this collection) that suspends judgment about certain epochal assumptions built deep into the foundations of too much practice, thought, and even scholarship about just these questions.
In “The New Pythagoreans,” Chris Gilliard and Hugh Culik look closely at the long history of Pythagorean mystic belief in the power of mathematics and its near-exact parallels in contemporary promotion of digital technology, and especially surrounding so-called big data.
In “From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ after the Digital Turn,” Zachary Loeb asks about the nature of the literal and metaphorical machines around us via a discussion of the work of the 20th century writer and social critic Lewis Mumford, one of the thinkers who most fully anticipated the digital revolution and understood its likely consequences.
In “Digital Proudhonism,” Gavin Mueller writes that “a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry and reformulate both a theory of cultural production under digital capitalism as well as radical politics of work and technology for the 21st century.”
In “Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn,” Tim Duffy pushes back “against the valorization of ‘tools’ and ‘making’ in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond) has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.”
Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling, in “Origin Stories in the Genealogy of Cherokee Language Technology,” argue that “the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code.”
In “Artificial Saviors,” tante examines the pseudo-religious and pseudo-scientific rhetoric found at a surprising rate among digital technology developers and enthusiasts: “When AI morphed from idea or experiment to belief system, hackers, programmers, ‘data scientists,’ and software architects became the high priests of a religious movement that the public never identified and parsed as such.”
In “The Endless Night of Wikipedia’s Notable Woman Problem,” Michelle Moravec “takes on one of the ‘tests’ used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.”
In “The Computational Unconscious,” Jonathan Beller interrogates the “penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing.”
In “What Indigenous Literature Can Bring to Electronic Archives,” Siobhan Senier asks, “How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?”
The digital turn is associated with considerable enthusiasm for the democratic or even emancipatory potential of networked computing. Free, libre, and open source (FLOSS) developers and maintainers frequently endorse the claim that the digital turn promotes democracy in the form of improved deliberation and equalized access to information, networks, and institutions. Interpreted in this way, democracy is an ethical practice rather than a form of struggle or contestation. I argue that this depoliticized conception of democracy draws on commitments—regarding personal autonomy, the ethics of intersubjectivity, and suspicion of mass politics—that are also present in recent strands of liberal political thought. Both the rhetorical strategies characteristic of FLOSS as well as the arguments for deliberative democracy advanced within contemporary political theory share similar contradictions and are vulnerable to similar critiques—above all in their pathologization of disagreement and conflict. I identify and examine the contradictions within FLOSS, particularly those between commitments to existing property relations and the championing of individual freedom. I conclude that, despite the real achievements of the FLOSS movement, its depoliticized conception of democracy is self-inhibiting and tends toward quietistic refusals to consider the merits of collective action or the necessity of social critique.
John Pat Leary, in “Innovation and the Neoliberal Idioms of Development,” “explores the individualistic, market-based ideology of ‘innovation’ as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called ‘development.’” He works “to define the ideology of ‘innovation’ that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.”
Annemarie Perez, in “UndocuDreamers: Public Writing and the Digital Turn,” writes of a “paradox” she finds in her work with students who belong to communities targeted by recent immigration enforcement crackdowns and the default assumptions about “open” and “public” found in so much digital rhetoric: “My students should write in public. Part of what they are learning in Chicanx studies is about the importance of their voices, of their experiences and their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder how I as an instructor should either encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched, every aspect of their lives.”
Gretchen Soderlund, in “Futures of Journalisms Past (or, Pasts of Journalism’s Future),” looks at discourses of “the future” in journalism from the 19th and 20th centuries, in order to help frame current discourses about journalism’s “digital future,” in part because “when it comes to technological and economic speedup, journalism may be the canary in the mine.”
In “The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus,” Anthony Galluzzo examines the often-misunderstood and misrepresented writings of William Godwin, and also those of Thomas Malthus, to demonstrate how far back in English-speaking political history go the roots of today’s technological Prometheanism, and how destructive it can be, especially for the political left.