This essay has been peer-reviewed by the boundary 2 editorial collective.
Paris is quiet and the good citizens are content.
—Napoleon Bonaparte’s first message delivered by optical telegraph after seizing power in 1799
Maybe it should not surprise us that among the first messages of consequence sent by telegraph is an exacting statement about the silent satisfaction of the “people.”[1] Contrasted with Samuel Morse’s portentously open-ended “What hath God wrought?”, the cool administrative authority of Napoleon Bonaparte’s dispatch is rendered all the more acute. A republican concern for “contentment”—the Enlightenment affection for happiness, even—is condensed into a secure, omniscient, and unambiguous transmission meant for deferred public consumption. It is a mode we might call “executive”—political information traveling the secure means of public power, assuming public consent as confirmation of its administrative reach; it is fitting that Napoleon inherited this state technology from the republic’s ruling Directoire Exécutif (Executive Directory). The victory message encodes and relays, rather than prints and broadcasts, the certainty of order and noiselessness, presuming authority over a defined geography.
That certainty, in turn, can be understood as a product of the telegraphic process itself, a new, though not completely un-theorized, thing.[2] Prior to the telegraph, during the period of the republican nationalizing of print, a wave of information networks sought to align the aggregate of public discourse with the apparatus of state power. Napoleon’s telegraphy, however, represents a point of inflection away from this aggregation toward state power, inviting us to ask how the telegraph changed republican governance. Consider that republican theories of oration and writing imagined an expansive and egalitarian field of persuasion and critique that, like a self-leveling body of water, would spread just behind, or ahead of, the nation itself. Coextensive with the information springs that diffuse its imagined selves, the new republic guaranteed a check on the balance of power between the public and civil authority. But during the transformations of Napoleonic statecraft, framed by republican ideology and the necessities of war, it became apparent that information would be both critical to public welfare and increasingly restricted within its compass. Networks of information would no longer be isomorphic with public self-conceptions—if they ever were. Republicanism’s sphere of mediation, an idealized reflecting surface, also started to become a linear network of self-locking channels—dark chambers and webs as opposed to mirrors and funnels. To be clear, telegraphy did not supplant these forms of mediation and political circuitry; rather, it added to the range of techniques available to the new republican state.
When we begin to consider the semaphoric landscape of national communications, we see the various ways the republic of letters was neither simply a network of transparent arguments and narratives about national integration or even regional identity, nor a stage upon which the powerful exercised their arguments anonymously in service to the public good. During the early Federal period of the 1790s, it was increasingly a republic of inverted publicity, with the relays, compressions, and decompressions of texts serving to disrupt the economic and informational systems of print, signal-making, and manuscript. And this is true not only of everyday signal-making within networked New England (cf. Matt Cohen), but also of the highest and most canonical of American Enlightenment thinkers and politicians. Understanding the technology as a transatlantic republican transfer, we can begin to understand the republic of letters in its full range of mediated symbolic economies.
*
The chambering effect of the earliest semaphore telegraphs was architecturally visible in the structure of the telegraph stations themselves, and in the patterns such structures made upon the geography of the modern national republic. Figure 1 shows a Binary Panel telegraph from 1794, which diagrammatically charges each space with discrete intelligence, at once connected to and sealed off from an integrated whole.
Figure 1: Binary panel telegraph. From Louis Figuier, Les Merveilles de la science (1867–1891), Tome 2.
Designed as a set of relational binaries, the networked physical space combines autonomous coding with a necessary extension of that space into repeated linkages. Each linkage requires privileged access to interpretation and to the overall design of the coding structure. Individually and taken together, these stations signaled an open-air puzzle that invaded public space with private meanings about public things. Under such conditions—these coded assemblages—readers and writers form a publicly private sphere, or what we might call a crypto-public.
That geometry of segmentation and networking repeated itself at larger scales. Indeed, such segmenting is evident in the regularized rhizome or web that becomes the French station map from 1792 until 1852 (see Figure 2). At the risk of over-metaphorizing the pattern, this webbed rhizome forms a networked humanoid (the King’s ghosted body?) in its telegraph system, revealing in the process a kind of early infographic of modern power.
Figure 2: Chappe telegraph system
Paul Virilio has described the post-Roman segmentarity of territory as essential to the logic of modern states, imposing linearity in its militarized necessity. Perhaps that is discernible here in the return of an updated Roman militarism, very much in keeping with an echoed Napoleonic neoclassical and royal poetics, though now with the modern executive as its austere figurative counterpart. The map reveals the telos of this transformation: a capacity to make meaning visibly transportable across the French topography and adherent to the needs of the new republican executive.[3]
We can begin to evaluate this transformation, particularly visible in the 1790s, by asking questions not just of the broader shapes of networking and mediation, but of our dispositions as readers of both ambiguity and precision. The new telegraphic sphere transformed the methods of reading itself, taking the rage for cryptography and ciphers into a more systematic and pervasive (because more public and open) mode of coded transmission. Words, clauses, phrases, and sentences (which can helpfully be classified as “intelligence,” “information,” “data,” or “plaintext”) were condensed into graphemes, and then sequenced for reception that resists confusion; we might even think of it as among the earliest forms of pure information. And yet we should remember that the pure informatics of the telegraph does not exclude confusion from the structure of transmission—indeed, confusion (as with all cryptography) is a feature, not a bug. Confusion fills the role of the sealed envelope in open-field communication like telegraphy, keeping the visible illegible, but packaged nonetheless. The conversion of plaintext into a smoother temporal package, able to splice distance, divides legibility into the indecipherable and the solvable. Depending on one’s access to the network of interpretive competence, and on one’s intention and position in the signifying relay, what appears opaque can be perfectly legible.
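The mechanics of this crypto-public relay can be made concrete with a small illustrative sketch in Python (my own, with an invented codebook and vocabulary, not a reconstruction of any historical cipher): plaintext is condensed into numeric graphemes, relayed in the open, and remains legible only to those who hold the key.

# A minimal, hypothetical sketch: an invented codebook condenses plaintext words
# into numeric graphemes that can be relayed openly, station to station.
CODEBOOK = {
    "paris": 14,
    "is": 3,
    "quiet": 27,
    "the": 1,
    "citizens": 41,
    "are": 5,
    "content": 52,
}
REVERSE = {number: word for word, number in CODEBOOK.items()}

def encode(plaintext):
    """Condense words into a sequence of numeric signals for open relay."""
    return [CODEBOOK[word] for word in plaintext.lower().split()]

def decode(signals):
    """Recover the plaintext; legible only with access to the shared codebook."""
    return " ".join(REVERSE[signal] for signal in signals)

relayed = encode("Paris is quiet the citizens are content")
print(relayed)          # what an observer in the open field sees: [14, 3, 27, 1, 41, 5, 52]
print(decode(relayed))  # what the keyed reader recovers

Whatever the historical particulars of a given code, the point of the sketch is the asymmetry it models: the same sequence is simultaneously public and opaque, depending on access to the key.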
Telegraphy’s early binary hermeneutics had its sources in a worry about the degrading of meaning across distance and about the sufficiency of human agency itself. As a result of perceptual and interpretive decay, uncertainty emerges as an unfortunate consequence of the limitations of human acuity, a crippling nearsightedness. Telegraphy thus demanded a larger perceptual apparatus that could be prosthetically attached to the human eye, one given form both in the structures of the transponder stations built during the period of Napoleonic republicanism and in the political state that necessitated and organized those stations. The contingencies of visibility made signification’s emblems layered and rich, and they can be read as meaningful artifacts in both their patterns of projective mechanics and their metaphors for explaining political effects.
The quest for a secure broadcast meant rethinking the terms of internal signification, transmission, and external reception—a reclassifying of the perceptual mechanics of communication. Some of that work had already been theorized as a merging of the “sudden” with the “certain.” In 1684, Robert Hooke hailed the “certainty” of a proposed telegraphic “intelligence” in a “Discourse to the Royal Society. . . shewing a Way how to communicate one’s Mind at great Distances”: “That. . . [is] a Method of discoursing at a Distance, not by Sound, but by Sight. . . . ’tis possible to convey Intelligence from any one high and eminent Place, to any other that lies in Sight of it. . . in as short a Time almost, as a Man can write what he would have sent, and as suddenly to receive an Answer” (Hooke 1967, 142). Suddenness is Hooke’s word for automatism, or perhaps “telepresence.” Data imprints itself onto a computational, agent-less consciousness, forgoing faculties of judgment and interest that delay immediate understanding and make demands on memory. Distance and recall become what I would call negative fields of exchange, important only to the degree that they are made to recede.[4] It is the origin of text as pure data rather than argumentation or even “thought”—non-conversational, foreclosing dialogue and memory, a text that is constrained and defined by its linear velocity, particularly well-suited to the purposes of the executive state. That executive state stood to recover what its nationalized representational technology might dissolve in the circuitry.
That executive reclamation, at the dawn of the constitutional era of republics, is driven by the semiotics of technological migrations and deployments. I assemble this story out of disparate fixtures that may seem wholly unrelated. That is, it may seem this essay is bifurcated nationally and conceptually—France and America, telegraphic structures and telegraphic metaphors—but I contend that each is essential to understanding how the other functions. While this essay is largely about French telegraphes and then American “telegraphs,” it is really about how the new world of executive informatics in the transatlantic republican era came to recognize itself as a political technology and, in so doing, made room for a new kind of communicative regime.
*
Edward Cave’s London-based Gentleman’s Magazine in 1794 ran a long item about the deployment of the semaphore telegraph in France, a piece of reportage that was reprinted in US periodicals not long after. What is particularly telling about Cave’s piece is that it appends to the report a quotation from Ovid: “Fas est ab hoste doceri [It’s right to learn from one’s enemy].” Cave signals to his readers that the import of the technological breakthrough is as much in its usefulness in defining enemies as in its power as a means of communication.
Mr. Urban Sept 11.
The telegraph was originally the invention of William Amontons. . . [who] contracted such a deafness as obliged him to renounce all communications with mankind. . . . This philosopher also first pointed out a method to acquaint people at a great distance, and in a very little time, with whatever one pleased. This method was as follows: let persons be placed in several stations, at such distances from each other, that, by the help of a telescope, a man in one station may see a signal made by the next before him; he immediately repeats this signal, which is again repeated through all the intermediate stations. This, with considerable improvements, has been adopted by the French, and denominated a Telegraphe; and, from the utility of the invention, we doubt not but it will be soon introduced in this country. Fas est ab hoste doceri. (Cave 1794)
And so indeed the French enemy speaks of telegraphy as the key to republican governmentality: in this iteration, the invention is quickly attached to the purposes of a voracious political program that is, in turn, alleged to be in the service of shadowy power-hungry persons.
To Cave’s account is appended a much-reprinted copy of the notorious Jacobin Bertrand Barere’s report to the French Convention of August 15, 1794, which describes the telegraph’s value to the new Republic:
By this invention the remoteness of distances almost disappear; and all the communications of correspondence are effected with the rapidity of the twinkling of an eye. The operations of Government can be. . . facilitated by this contrivance, and the unity of the Republick can be the more consolidated by the speedy communication with all its parts. The greatest advantage which can be derived from this correspondence is. . . its object shall only be known to certain individuals. (Cave 1794, 815–16)
Barere’s advertisement in the English-speaking press for the new technology is decidedly political but blind to its ideological contradictions—“certain” individuals who make use of certainty, executives, seeking the unity of its dispersed publics. The telegraph is immediately conceived in such quarters in executive mode, exclusive to what Friedrich Kittler (2010) in Optical Media calls a “militant elite” (74), despite the radical democratic alignment of the partisans who are accused of deploying it.[5]
The capacity to make information a spectacular but exclusive part of modern control, and a certain kind of public comfort, was immediately evident to all who had a stake in political power. Consider the telegraph in Figure 3, which Claude and Ignace Chappe perfected as a synchronized pendulum system between 1790 and 1791.[6] Despite its anachronism as a depiction (c. 1868), it is a useful rendering of the original system as first used on March 2, 1791, with synchronized towers in Brulon and Parce. In the distance (16 kilometers) is the synced analog clock in Brulon, distinguished by the popping face of phosphorous white set on the edge of a calm green horizon.
A modified clock face in a guillotine-like frame is the inaugural emblem of telegraphic coding—here, divided temporality looms over all aspects of its earliest methods. More precisely, the clock face is a dead metaphor, with no hands inexorably rotating through a fixed template. It has become, instead, an analogic key for the alphabet (the severed head still speaking) divided into sixteen points that indicate letters through serial combinations of position and sound. The reliance upon numerals underscores the medium’s crypto-publicity, even as it bespeaks time as something to be erased or repositioned. Temporality has been transferred from the “clock” to the spaces between, with the linearity of time made literal in its acoustical and graphical run over space, the natural world modernized by technical structures; indeed, now clocks could be made to “speak” to one another, from station to station, over the mute countryside. The speech of such machines across backcountries was—even as early as 1795 in the following press report on the surrender of Condé-sur-l’Escaut (see Figure 4)—referred to as “lines,” well before the more obvious metonym deriving from the cables and wiring of electric telegraphy.
Figure 4: Press report on the surrender of Condé-sur-l’Escaut. Source: Readex (Readex.com), a division of NewsBank, inc.
The stations were not meant for public squares, as in the somewhat fanciful rendering above (see Figure 3); they belong instead to the austere, passive-voiced world of the reportorial description here (see Figure 4). Indeed, the public should be imagined as a potential decoder in the countryside, waiting passively or aggressively beneath the flow of information, a confusion of unseen seers unable to see—“copying the words so expeditiously, and for throwing such a body of light as to make them visible. . . does not yet appear.” There is not only a deliberate confusion engendered by the encrypted signals, but also a confusion that arises from the non-human agency of transponders—bodies of light and bodies of human copiers are effectively the same.
In fact, that initial puzzlement or wonderment about the grafting of data over demographics was soon turned to distrust of the elitist, undemocratic origins and ends of executive transponders. This disorientation—the national imaginary threatened by encrypted units of time and space, or what Patricia Crain (2003) describes as “paranoia” about the semaphore telegraph’s “seemingly self-authorized power”—is not discernible in any of the early transatlantic accounts (67). But we know that the successor to Chappe’s panel telegraph was destroyed twice by mobs who apparently feared that the telegraph was being used to aid royalist forces within France.
Barely two years later, after the synchronized pendulum and then binary panels, the French had established the first telegraph line, an optical semaphore that used a four-part armature, the angles of which could be regulated (see Figure 5).
Figure 5: French semaphore telegraph.
On view here is a muted sociability, with blackened windows in the tower, the crowd absent from a landscape that suggests its utilitarian purpose with diagrammatic rigor. A single man stands with a horse, back to us, presumably part of the postal system, ready to dispatch an end note. The clock face has been replaced by a purely angular semiotics, one whose capacity for coded permutations was multiplied greatly, to sixty-three possibilities. In the armature of the new semaphore, the time of transmission is squeezed into smaller intervals, smoothed out and compressed, abbreviation disguised as executive technique. Another way to put it is that executive communication via telegraph was “steganographic,” a term borrowed from diplomatic ciphering’s system for managing confusion. Steganography provided for the polity to be policed and protected via codes.
*
Steganography, the communicative science of secrecy, is one way telegraphy crossed the Atlantic, before the infamous nineteenth-century story of the heroic laying of the cable. For instance, Benjamin Franklin Bache’s 1797 English-French reprinting of P. R. Wouves’s fragmentary Tableau Syllabique et Steganographique/A Syllabical and Steganographical Table figures encryption as a form of telegraphy, stressing the way steganography guarantees the “safety of the secret” (Wouves 1787, repr. 1797).
Figure 6: Title page of French steganography text, published in Philadelphia.
Part of a mini-genre of ciphering and cryptography tables in the 1790s, the manual assembled methods for encrypting information numerically so that it could travel publicly without detection as to alphabetic meaning. As Wouves’s (1787, repr. 1797) manual puts it, “Experience will soon bring to a general demonstration, that the method here employed, is not only conducive to secure the secret of any kind of written intercourse, but is even advantageously applicable to all telegraphical purposes, as well as to all sorts of reconnoitering signals, either for the sea or land service.” The word “telegraphical” here, in its adjectival charge, tropes automatism and brevity, as the republic begins to experiment with abbreviation and secrecy as civic activities.
In 1793, about four months after George Washington delivered the shortest inaugural address in American history (a speech entirely given over to a brief of republican executive caution: depose me if I abuse power), several newspapers reported the demonstration of Chappe’s telegraph as part of a nationalized system. The reportage picked up considerably with news of its instrumentality in the French capture of Condé-sur-l’Escaut from the Austrians in 1794. But rather than arriving as the lightning of Napoleonic informatics, telegraphy actually crossed the Atlantic in the slow boat of metaphor, as a second-order signal about easily relayed public information rather than a viable state technology of applied steganography. As with other new technologies finding their places within uneven national development, telegraphy and steganography became devices for conceiving a new cultural and political capability rather than stories about what the devices themselves did. In the United States, telegraphy became a commercial curiosity inserted into print culture in a way that was either part of an enthusiastic endorsement of, or deep suspicion about, republicanism and its aggressive mutations in Jacobin and early-Napoleonic France. What we notice most is the specter of a technology being used quite openly to ideological ends—republicans praising the possibility of the telegraphic state and Federalists decrying the militarized appropriation of its capacity for public information. Both postures were paranoid, but not entirely unwarranted.
Telegraphy entered the burgeoning print press of the early United States as a metaphor about how dissemination of news across the republic might take place—“telegraphing,” as Barere made clear, keeps information current even at great distances, to the benefit of executive and public alike. In the process of metaphorical transposition, the military was quickly conflated with the political. It joined other technophilic tropes of distance-, time-, or mechanical-conquest in newspapers that incorporated words like “telescope,” “time piece,” “balance,” and “orrery” into their nameplates. This needful over-reach in the naming of newspapers even became the object of satire. On the millennial date of January 1, 1800, the Massachusetts Mercury made sport of the practice of media metaphors in a poem, targeting competitors like The Aurora, The Argus, and The Bee:
That ‘RORA’s’ cloudy, ‘ARGUS’ squints;
That the pert ‘BEE’’s a worthless drone,
Sans sting or honey of his own.
But dropping anger, loud you laugh,
At signal false of ‘TELEGRAPH.’
Despite the manifest epidemic of journalistic over-selling, the telegraphic brand in the US preserved a naïve belief in the transparency of print messaging—it was the same, only faster, and so its use in key parts of the new republic was reassuringly not ambiguous or threatening. As long as it was subsumed within print networks, the telegraph was only indirectly threatening to those who feared republicanism and its radical political agenda derived from revolutionary militancy.
An assortment of rural towns and cities outside the northern periphery had papers that adopted the title. Charleston, SC, had North America’s first: the Telegraph, which published only four issues, from March 16 to March 20, 1795, under the variant title The Telegraph and Charleston Daily Advertiser. A proposed Baltimore Telegraphe was advertised in the Aurora General Advertiser in February 1795 but never published. Perhaps the most infamous and influential early “telegraph” (and in many ways an anomaly to the regional and demographic rule at work here) was Boston’s arch-Republican twice-weekly Constitutional Telegraph.[7] Boston aside, what one notices is that the majority of “telegraphs” were removed from the metropolitan centers, in suburbs or frontier towns, across regions north to south, mid-Atlantic to west. Examples include the Telegraph in Georgetown, KY[8]; the American Telegraphe in Newfield, CT, with a variant of American Telegraphe & Fairfield County Gazette[9]; and Maine’s Wiscasset Telegraph.[10]
The use of the technologized press metaphor tended to be a signal about the party affiliations of the printer, as was the case with Brookfield, Massachusetts’s The Moral and Political Telegraphe.[11] It was produced as a successor to the venerable Isaiah Thomas’s Worcester Intelligencer by Thomas’s apprentice Elisha H. Waldo. Upon taking it over for its run as the Telegraphe, Waldo shifted the paper’s allegiance to republican France, which, as a front-page report of an anti-Jacobin conspiracy put it, had “warm American friends.” There is something fitting about the great printer’s pupil paradigmatically adopting the telegraphic metaphor to politically realign and modernize the news. Allegiance, whether changed or emphasized, moral or political, was publicized by subtle orthographic signals: Boston’s The Constitutional Telegraph changed its spelling to the French in 1802, becoming The Constitutional Telegraphe (see Figures 7 and 8). After the War of 1812, and, arguably, the onset of a Napoleonic executive posture in US political culture, the telegraph secured its place firmly in a variety of nameplates: the Hillsboro Telegraph[12]; the Columbian Telegraph[13]; the Rochester Telegraph[14]; the Telegraph[15]; and the American Telegraph published in Brownsville, PA.[16]
Figure 7: The English Constitutional Telegraph. Source: Readex (Readex.com), a division of NewsBank, inc.
And three years later, the Constitutional Telegraphe, transformed with the French spelling and the added flourish of an iconic masthead (see Figure 8):
Figure 8: The French Constitutional Telegraphe, three years later. Source: Readex (Readex.com), a division of NewsBank, inc.
The press motif of a telegraphed republic had a life beyond bringing news and a politicized Napoleonic semiotics to the suburbs and rural outposts of the United States. Indeed, its metaphorical purposes did cultural, and even literary, heavy lifting beyond the journalistic. It was both an emblem of progressive inventiveness in pro-French propaganda and a warning about dystopic ruination in Federalist satire. In the pro-French variety, telegraphy became a glorious artifact and a feature of historical prophecy: Virginia republican St. George Tucker’s 1794 replica model/homage to Chappe’s telegraph was soon after included in Peale’s Museum. And the Aurora/Gazette of 1795 hails the telegraph for bringing news at the speed of “lightning” of the “disgrace of our enemies” and the “towering flights of our glories,” positing the conflation of the technology with the triumphantly political. The English historian John Gifford produced a grand hagiography, The History of France (1792), praising the telegraph and Napoleon’s military genius, which was republished by the Aurora’s republican printer and editor, William Duane.
Of the latter, anti-Jacobin genre, John Lowell Jr.’s The Antigallican [sic]; or, The Lover of His Own Country: In a Series of Pieces Partly Heretofore Published and Partly New, Wherein French Influence, and False Patriotism, Are Fully and Fairly Displayed (1797) retails the purported schemes by which Jacobins were wont to use technology to bypass rational debate and aim straight for the passions.
By pompous professions of their own purity, and by an over zealous crimination of their opposers, the Jacobins always aim at exciting the passions of the people, before their understandings have opportunity to examine into the truth. They know that public clamor is like a torrent, which in its destructive course, sweeps away every vestige of human wisdom. . . . By exciting it therefore they hope to overwhelm the monument of law, order and public authority. Thus in the case of the treaty with Great Britain, no arts, no intrigues, no falsehoods were omitted, to excite the prejudice and inflame the passions of the people. . . . They distributed it with the rapidity of the telegraph, and promoted instant discussions of it in illegal assemblies. . . proud ambition leagued with stupid folly. (32)
Lowell’s “telegraph” is a symbol, part of a political jeremiad meant to appropriately dispose the public to a threat. It is accurate insofar as it presages President James Madison’s rather overt preying on public passions fifteen years later, during the War of 1812. It is also a moment that arguably ushered in the new age of war-making as executive action, when the telegraph’s secretive modalities do indeed feed a politics that is really neither republican nor Federalist, but executive. And while Lowell might have been justified in worrying about the Democratic–Republican tilt toward France, he need not have fretted over the communicative means to that end, because the telegraph’s military mode was similarly disdainful of the public and its passions; passions could be political, but executive telegraphy was merely strategic.
*
The realization of the telegraph as a militarized national project took some time, but of course it did happen just early enough for the United States to weigh in formally against British anti-Bonapartism. But the actual building of telegraph networks stayed relatively modest in the first decade of the nineteenth century. The telegraph is first mentioned as a possible means of “dart[ing] information” in hypothetical land and naval maneuvers in John Dickinson’s 1798 pro-French tract, A Caution: or, Reflections on the Present Contest between France and Great-Britain: “If intelligence from places more remote is wanted, telegraphs can dart it, with any requisite velocity.” Not long after, Jonathan Grant of Belchertown, MA, obtained a patent for an improved semaphore telegraph in 1800 and ran a line between Martha’s Vineyard and Boston (see O’Rielly 1869, 262–69).
In 1813, in the midst of North America’s extension of Napoleonic war, civil engineer Christopher Colles reverse-engineered the telegraphic metaphor and published Description of the Numerical Telegraph for Communicating Unexpected Intelligence by Figures, Letters, Words, and Sentences (see also O’Rielly 1869, 262–69). The word “unexpected” here replays the sense of suddenness and even automatism suggested in Hooke’s plans for a medium that would allow two minds to surpass distance. The condition of war becomes a crystal in the fluid of the saturated medium of political necessity. In the midst of the War of 1812, this is the first attempt to really theorize and diagram a new kind of semaphore telegraph, which would be put to the uses of the American state (see Figure 9). Colles’s plan, first advertised in the Columbian in 1812, was realized in a forty-seven-mile line between New York City and Sandy Hook, NJ.
Figure 9: Christopher Colles’s semaphore telegraph
Colles’s semiotic machine was a hybrid of the semaphore, the ratcheting gear, and the clock face, suggesting for the first time a structure not wedded to the land, in the manner of a house, but a mobile, if clunky, device. Moreover, Colles’s telegraph is DIY-modular, and as with the French telegraphic station map it is comically homologous to the human form—a promising stick figure for the execution and administration of post-Enlightenment “unexpected intelligence.” The austere new agent of meaning is fit for Madisonian executive pragmatism—or bumbling, as the case may be. Unlike the French homology of state power, here the telegraphic pattern is coded at a smaller scale, individualized and repurposed for the public sphere of dispersed American political regionalism.
We are learning in the process of historicizing “new media” that nothing is especially new under certain conditions of modernity (see Pingree and Gitelman 2003), and this is particularly true for telegraphy as a medium for political discourse and organized statecraft. The history of the telegraph is far from discretely composed by the period I have chosen to examine in this essay; rather, a range of phenomena—print culture, republicanism, early American regional allegiances—resolves to greater clarity in the wake of the telegraph’s semiotic migrations. The hardnosed political deployment of the technology in France, the rhetorical circulation of an idea in the absence of state support in the United States, and then the effects of republican executive theory in the galvanizing moment of war—all form a proto-apparatus for militarized communication.
And there is a deeper connection worth following and remembering that has to do with how cultural memory works and how political techniques maintain critical visibility to the public: During the 1790s, the telegraphic metaphor lodged itself within republican print culture benignly and visibly, removed from (but still encoded with) the terms of militarized informatics. That metaphor of telegraphic power emerged as a discernible self-consciousness about the technological shifts and executive ascendancy that began to fundamentally reorder the print sphere and its publics; but that self-consciousness did not extend to shifts in how executive semiotics reordered political life as a function of its form. The early technologists saw all symbol and nation, but not form or the implications of form. Telegraphy is, as such, part of the story of the undoing of the political order of print and the making of something newly invisible both to the public and to itself. Such technology, in which data is joined to politics, communicates its own “natural” ascendancy.
I’d like to thank Sohinee Roy and Christopher Hanlon for their help in preparing this manuscript for publication.
References
Beauchamp, Ken. 2008. A History of Telegraphy. London: Institution of Engineering and Technology.
Cave, Edward. 1794. Gentleman’s Magazine and Historical Chronicle 64, Part 2: 815–16.
Cohen, Matt. 2009. The Networked Wilderness: Communicating in Early New England. Minneapolis: University of Minnesota Press.
Colles, Christopher. 1813. Description of the Numerical Telegraph for Communicating Unexpected Intelligence by Figures, Letters, Words, and Sentences. Brooklyn, NY: Alden Spooner.
Crain, Patricia. 2003. “Children of Media, Children as Media: Optical Telegraphs, Indian Pupils, and Joseph Lancaster’s System for Cultural Replication.” In New Media: 1740–1915, edited by Lisa Gitelman and Geoffrey B. Pingree, 61–90. Cambridge, MA: MIT Press.
Dickinson, John. 1798. A Caution: or, Reflections on the Present Contest between France and Great-Britain. Philadelphia, PA: Benjamin Franklin Bache.
Giedion, Sigfried. 2014. Mechanisation Takes Command: A Contribution to Anonymous History. Minneapolis: University of Minnesota Press. First published 1948.
Gifford, John. 1792. The History of France. London: C. Lowndes and W. Locke.
Holzmann, Gerard J., and Björn Pehrson. 2003. The Early History of Data Networks. Hoboken, NJ: Wiley.
Hooke, Robert. 1967. Philosophical Experiments and Observations of the Late Eminent Dr. Robert Hooke. Edited by William Derham. London: Frank Cass and Co.
Innis, Harold A. 1951. The Bias of Communication. Toronto: University of Toronto Press.
Kittler, Friedrich. 2010. Optical Media: Berlin Lectures 1999. Translated by Anthony Enns. Malden, MA: Polity.
Locke, John. 1689. Essay Concerning Human Understanding. Oxford: Oxford University Press.
Lowell Jr., John. 1797. The Antigallican [sic]; or, The Lover of His Own Country: In a Series of Pieces Partly Heretofore Published and Partly New, Wherein French Influence, and False Patriotism, Are Fully and Fairly Displayed. Philadelphia, PA: William Cobbett.
Manovich, Lev. 2001. The Language of New Media. Cambridge, MA: MIT Press.
O’Rielly, Henry. 1869. The Historical Magazine, Notes and Queries Concerning the Antiquities, History, and Biography of America. Vol. V, Second Series. Morrisania, NY: Henry B. Dawson.
Petre, F. Loraine. 2003. Napoleon and the Archduke Charles. Whitefish, MT: Kessinger.
Pingree, Geoffrey B., and Lisa Gitelman. 2003. “Introduction: What’s New About New Media?” In New Media, 1740–1915, edited by Lisa Gitelman and Geoffrey B. Pingree, xi–xxii. Cambridge, MA: MIT Press.
Wertheimer, Eric. 2012. “Pretexts: Some Thoughts on the Militarization of Print Rationality in the Early Republic.” Canadian Review of American Studies 42, no. 1: 21–35.
Wouves, P. R. 1787, repr. 1797. Tableau Syllabique et Steganographique/A Syllabical and Steganographical Table. Philadelphia, PA: Benjamin Franklin Bache.
Notes
[1] See Patricia Crain’s (2003) brief but seminal discussion of Napoleon’s message in “Children of Media, Children as Media.” Napoleon’s communiqué of 1799 was not, of course, the first message sent by nationalized telegraph. The very first message came years earlier, in 1794, when Napoleon relayed word of the capture of Quesnoy, an intriguingly precise post-revolutionary report about the surrender of “slaves”: “Austrian garrison of 300 slaves has laid down its arms and surrendered at discretion” (Petre 2003, 65; original emphasis).
[2] Much of my thinking about this set of historically resonant design issues arises from a vein of media theory and history that originates in the work of Harold A. Innis, whose The Bias of Communication (1951) begins to connect empire with media history, and of Sigfried Giedion, whose Mechanisation Takes Command (1948, repr. 2014) articulates a way to think about design as a signifier with deep meaning embedded in social (what he calls “anonymous”) history. I am aware my own terminology here is indebted to the new directions in media history, which take chances with analogies and homologies of form—thus the words “computational” and “binary” in my discussion of the early semaphore telegraph. I view this chance-taking as a way to theorize the semaphore as part of ideologies of representation that are playing out now, in fields like circuit design and social media analysis, but have distinct and identifiable origins in the past. Others in this lineage are, of course, Friedrich Kittler (2010) and Lev Manovich (2001), though I am wary of fully agent-less iterations of media history. That wariness is why I am interested in the political figure of the executive within this interpretive and representational network.
[3] No accident, it would seem, that Napoleon spent part of 1795 in the Department of Topography. It is also worth mentioning that 1795 was the year he produced the fragments of what would become his novella, Clisson et Eugénie, an example of a remarkably direct style of novelistic narrative.
[4] This is a view of language largely shared by Hooke’s contemporary John Locke in his Essay Concerning Human Understanding (1689); Locke is deeply impatient with the inefficiencies of language as a means to certainty in communication. For further elaborations on the work of Hooke as a key figure in the history of telegraphic representation, see Wertheimer (2012).
[5] Kittler (2010) uses this phrase in referring to Athanasius Kircher’s lanterna magica, pointing out that Kircher developed the projecting device as part of military research (74).
[6] According to Gerard J. Holzmann and Björn Pehrson (2003) in The Early History of Data Networks, the earliest experiments with the concept of the modern telegraph systems occurred in the French navy. Captain Decourrejolles, in 1783, used a coastal network of semaphore stations to transmit enemy positions. In A History of Telegraphy, Ken Beauchamp (2008) has an in-depth description of the development of electrostatic telegraphy, which predated Chappe’s invention, but remained for much of the eighteenth century a parlor spectacle rather than a viable state technology.
[7] Published 276 issues between October 2, 1799 and May 22, 1802.
[8] Published 5 issues between September 25, 1811 and December 22, 1813.
[9] Published 128 issues between April 8, 1795 and December 28, 1796.
[10] Published 41 issues between December 3, 1796 and March 9, 1799, with minor variant titles. It later had the title Lincoln Telegraph, which published 78 issues between April 27, 1820 and October 18, 1821.
[11] Published 67 issues between May 6, 1795 and August 17, 1796.
[12] Published 159 issues between January 1, 1820 and July 13, 1822 in New Hampshire.
[13] Published 11 issues between August 19, 1812 and December 25, 1812 in Norwich, NY.
[14] Published 144 issues between July 7, 1818 and December 26, 1820 in Rochester, NY.
[15] Published 77 issues between January 13, 1813 and July 30, 1814 in Norwich, NY (it was also known as The Telegraph and the Newton Telegraph).
[16] Published 159 issues between November 9, 1814, and March 4, 1818.
Journalists might be chroniclers of the present, but two decades of books, conferences, symposia, interviews, talks, special issues, and end-of-year features on the future of news suggest they are also preoccupied with what lies ahead. Still, few of today’s media workers are as prescient as William T. Stead, the English journalist and amateur occultist who came close to predicting the 1912 Titanic disaster twenty years before he died in it. In his 1893 short story, “From the Old World to the New,” a transatlantic ocean liner collides with an iceberg and erupts in flames, leaving the vessel’s desperate passengers clinging to a sheet of ice. Unlike on the Titanic, everyone in the story lives. Two passengers on a nearby ship receive telepathic distress signals. One has haunting visions of the accident in her sleep, and the other finds a written plea for help in the handwriting of a friend travelling aboard the sinking ship. The clairvoyants relay this information to their captain, who steers a perilous course through the icebergs and rescues the shipwrecked passengers. In 1893 wireless telegraphy, the early term for radio, did not yet exist (even if, as an idea, it electrified the Victorian imagination). By the time of the Titanic’s maiden voyage, radio was a standard maritime communication device. The technology helped, but was no panacea: the closest ship to receive the Titanic’s SOS signals arrived too late for Stead and many of his fellow passengers.
Stead was at the forefront of thinking about new technologies as well as his own demise. He also had a keen interest in journalism’s future, one shared by many of today’s news workers. Even people who failed to predict the collision of twentieth-century news models with the Web are now regularly called upon to forecast the profession’s future. Answering the future-of-news question requires experts to project past experience and current knowledge onto a forthcoming period of time. But does this question have a history of its own? Did earlier news workers prognosticate as often and with the same urgency? What anxieties or opportunities provoked past future thought? To answer these questions, I explore some future-oriented predictions, assessments, and directives of nineteenth- and twentieth-century reporters, editors, and media entrepreneurs in the United States and England. Their claims about the future of journalism serve as windows into the relationship between technology and news work at different historical moments and offer insights into today’s prognoses.
The Current Crisis
In the U.S., mainstream news agencies have been dealt a series of technological, economic, and political blows that have changed the way news is written, distributed, consumed, funded, and understood. Anxiety about the future can be understood in light of three interrelated challenges to the post-World War II information order: twenty years of digital technological disruption, the 2008 economic crisis, and politically and economically motivated challenges to the industrial news media.
By now it is a truism that screen-based digital technologies have transformed journalism. Newspapers, in particular, have experienced an advertising and readership decline more existentially threatening than the competition posed to print by radio in the 1920s or by television in the 1950s. The net presented a challenge to print media even before it became a major platform for news; in the mid-1990s, Craigslist disrupted the long-standing classified ad revenue streams of daily newspapers (Seamans and Zhu 2013). The incorporation of print news functions into the digital has only intensified since then. Internet saturation in U.S. households is at 84 percent and climbing (Pew Research Center 2015). News consumers are no longer tethered to a small set of news organizations; sixty-two percent read disparate stories they happen across on social media and Twitter feeds and do not subscribe to a single newspaper or news magazine (Gottfried and Shearer 2016).
Newspapers were already on shaky ground when the 2008 financial crisis struck. Economic downturn coupled with technological displacement led to a crisis of near-Darwinian proportions for an industry that had seen outsized profit margins for much of the twentieth century. Closures, bankruptcies, and mergers ensued. Historic papers like the Rocky Mountain News and Ann Arbor News shut their doors, and many other dailies and weeklies shifted to web-only formats (Rogers 2009). Over a hundred papers ceased publication between 2004 and 2016 (Barthel 2016). Papers that endured the techno-economic struggles of the 2000s had to rethink the nature of the news enterprise from the ground up, devising survival strategies in a new Mad Max-style media terrain depleted of advertising and subscribers.
Journalism never regained its footing after the financial crisis. As a Pew Research Center study suggests, “2015 might as well have been a recession year” for the traditional news media (Barthel 2016). The study paints a grim picture of the news industry. In 2014 and 2015, the number of print media consumers continued to drop. Even revenue from digital ads fell as advertisers migrated to social media sites like Facebook. And full-time jobs in journalism continued their steady decline: today there are 39 percent fewer positions than there were two decades ago. News consumption also began to shift from personal computers to mobile devices. Readers increasingly access news items on their phones, while standing in line, waiting at red lights, and at other spare moments of the day. In a metric-driven world, mobile news consumption has a silver lining: many sites are receiving more visits than before. However, the average mobile-device reader spends less time with each article than they did on PCs (Barthel 2016). Demand for news exists, albeit in ever-smaller and dislocated chunks.
At the same time, insurgent news entrepreneurs have altered the media field by leveraging weaknesses in the system and taking advantage of emerging technological possibilities. Just as the most successful nineteenth-century “startups” were enabled by new technologies like the steam press that sped up and lowered the cost of printing,[1] today’s media insurgents – people like Matt Drudge, Steve Bannon, the late Andrew Breitbart, and others – moved straight to digital news and data formats without prior institutional baggage. Since initial start-up costs on the Web are low and news production and dissemination are relatively easy, they were able to offer a trimmed-down model of news production that did not require reporting in the strict sense.
Some of these insurgents imagine a future for news unfettered by past or existing structures. They claim they want to take a sledgehammer to old media, but it really serves as their foil. In the current context, the terms old media, establishment media, and mainstream media are thrown around by new media players jockeying for position in a changing media field. The White House is currently engaged in a hostile yet mutually beneficial battle with mainstream news outlets, and it echoes the position that the news media is a liberal monolith that censors alternative positions.[2] At the same time, establishment journalism is enjoying a period of unpredicted growth due to the Trump bubble, and has been reinventing and reimagining itself as the Fourth Estate in the wake of the 2016 election.
Future-of-news experts reduce professional and public uncertainty in times of flux (Lowery and Shan 2016). But it is important to note that not all contemporary observers are worried. The late David Carr, for instance, believed Web startups like Buzzfeed would eventually become more like traditional news outlets. “The first thing they do when they get a little money is hire some journalists,” he said in 2014. He was confident that news audiences had an intrinsic desire for quality and that the business end of things would eventually sort itself out.
Conversely, people who express anxieties about the state of journalism are more likely to have experienced journalism as a stable and predictable field, and to have lost something when the old model collapsed. Those who are concerned worry that a digital-age business model will never arise to solve journalism’s funding problem. They worry that automation will replace journalists. They fear ideological bubbles and distracted audiences. They lament eroding legitimacy and credibility in an era of so-called fake news. And they hope prognosticators possess special knowledge or have more crystalline vision than others in the profession. But did past reporters and editors worry about the fate of their profession in the same way?
The Nineteenth Century
In the nineteenth century, journalism was a wide-open, experimental field on both sides of the Atlantic. Literacy rates were climbing. Print technologies had improved. Paper was cheaper to produce than ever before. Newspapers, book publishers, and the public were experiencing the power of mass dissemination. By the second half of the nineteenth century, newspapers’ social standing had improved. Some observers believed they were institutions on the ascent that would eventually play a social role on par with educators, clergy, or government officials.
However, concerns about the accelerated pace of newspaper work, the constant demand for “newness,” and the unremitting imperative to scoop rival papers were refrains in nineteenth-century journalistic commentary. In his biography of Henry Raymond, the journalist and author Augustus Maverick characterized news work in 1840s New York as an unceasing “treadmill”:
Only those who have been placed upon the treadmill of a daily newspaper in New York know the severity of the strain it imposes on the mental and physical powers. ‘There is no cessation,’ one newsman explained. ‘A good newspaper never publishes that which is technically denominated ‘old news,’ – a phrase so significant in journalism as to be invested with untold horrors. All must be daily fresh, daily complete, daily polished and perfect; else the journal falls into disrepute, is distanced by its rivals, and, becoming ‘dull,’ dies. (1870, 220)
I will return to the issue of acceleration later in the paper. For now, it is important to note that perceptions of speedup and fears of being outmoded were embedded in the experience of journalism as early as the 1840s.
Despite journalism’s daily stresses, Maverick felt the quality and legitimacy of papers were on the rise. The press had successfully overcome early nineteenth-century threats to credibility like partisanship and the sensationalism of the penny press, which printed fantastical, fabricated stories like the New York Sun’s Great Moon Hoax. Maverick believed this progress would continue unabated:
Accepting the promise of the Present, the prospect of the Future brightens. For, as men come to know each other better, through the rapid annihilation of time and space, they will be plunged deeper into affairs of trade and finance and commerce, and be burdened with a thousand cares, – and the Press, as the reflector of the popular mind, will then take a broader view, and reach forth towards a higher aim; becoming, even more than now, the living photograph of the time, the sympathetic adviser, the conservator, regulator, and guide of American society. (1870, 358)
Maverick envisioned a future in which the press would both facilitate and temper the social changes wrought by connectivity (changes that he analyzed in his 1858 book on the telegraph).
The same year Maverick predicted a role for the press as guide and advisor in an increasingly complex and interconnected world, William T. Stead began his career as a fledgling reporter. Few journalists tested, challenged, and wielded the power of the press quite like Stead. In his essay “The Future of Journalism” (1887), he envisioned radical and expansive new plans for the press. His own journalistic experiments had convinced him that editors “could become the most permanently influential Englishmen in the Empire.” But to ascend to this level one had to become a “master of the facts – especially the most dominant fact of all, the state of public opinion.” Editors guessed at public opinion, but had no way of gauging it. To remedy this, Stead suggested journalists be allowed twenty-four-hour access to everyone “from the Queen downward.” His news workers of the future would be intimately connected to public opinion across the social system. They would have unfettered access to powerful people, which would diminish the unquestioned authority and privacy of the aristocracy.
Since the system Stead imagined would be impossible for one person to manage, it would be held in place by travelers who would preach the importance of journalistic work with a missionary zeal. The travelers would eventually be “entrusted the further and more delicate duty of collecting the opinions of those who form the public opinion of their locality.” Stead was certain the enactment of his plan would result in the greatest “spiritual and educational and governing agency which England has yet seen.”
“The Future of Journalism” demonstrates a keen awareness of print’s power in an era of mass distribution and rapid news diffusion. It was grandiose because it imagined a far greater political role for journalists than they would ever possess. In some respects, though, Stead was a superior prognosticator. In 1887, the communications field was undifferentiated. His journalistic travelers and major-generals would ultimately manifest themselves in the twentieth century as pollsters, social scientists, and public relations specialists. But the editor would not sit at the helm, overseeing these efforts. Instead, journalist/editors would report their findings and beliefs, and serve as conduits in the flow of ideas between these professionals and the public. Despite their inadequacies, Stead’s writings on the future were more prescriptive and imaginative than many of today’s commentaries on the topic.
Twentieth-Century Futures
Nineteenth-century commentators on the news profession lamented acceleration, railed against partisanship, and decried certain forms of sensationalism, but they also believed in progress. This changed in the twentieth century. Frank Munsey began his career selling low-cost magazines and pulp fiction. In 1889 he launched the popular general-interest magazine Munsey’s Magazine, and he went on to amass a fortune between 1900 and 1920 purchasing and selling ten different newspapers, including The New York Daily News, The Boston Journal, and The Washington Times. He was a businessman first and journalist second. Munsey’s contemporaries viewed him as journalism’s undertaker: his very appearance on the scene heralded a newspaper’s demise. Oswald Garrison Villard described him as “a dealer in dailies – little else and little more” (1923, 81).[3]
Munsey’s “Journalism of the Future” appeared in 1903 in Munsey’s Magazine. In it, he suggests that the common editors’ refrain about “lack of good men” misses the real problem. The threat facing journalism is not a lack of well-trained workers, but the size of daily papers. Newspapers, which had been expanding since the 1890s, contained more sections, lengthier features, and larger Sunday editions than ever before. As papers grew, readers became rushed. The problem with news circa 1903 was that there was too much to write about and too much to read. Because they had to absorb so much, readers’ attention was at an all-time low (a concern that resonates with today’s news producers). For Munsey, the solution to the problem of the rushed and inattentive reader lay in condensation and conglomeration. Predicting extreme media consolidation long before it occurred, Munsey speculated that within four years (i.e., by 1907) the entire media field would be whittled down to three or four firms that would publish every newspaper, periodical, magazine, and book:
The journalism of the future will be of a higher order than the journalism of the past or the present. Existing conditions of competition and waste, under individual ownership, make the ideal newspaper impossible. But with a central ownership big enough and strong enough to encompass the whole country, our newspapers can afford to be independent, fearless, and honest. (1903, 830)
For Munsey, consolidation, quality, and independence are linked through the efficiency and scope of large-scale production and the nationalization of mass audiences. He does not foresee problems caused by monopolization or threats to newspapers from radio. He imagines technology only as it relates to its effects on the productive capacity of print news, which he thought was fettered by local ownership.
Writing during World War I, Willard Grosvenor Bleyer, founder of the University of Wisconsin journalism school and advocate of professional training, took a more modest view of journalism’s future. His primary concern was wartime press censorship and the spread of propaganda through semi-official news agencies. However, he considered these developments temporary deviations from the normal function of the press in a democratic society: eventually the profession would return to its pre-war normalcy. “The world war,” he wrote, “has given rise to peculiar problems, none of which, however, seems likely to have permanent effects on our newspapers” (1918, 14). Wartime austerity, especially the high price of paper, posed problems for the news industry. But there was a bright side. People wanted news from Europe, so the higher cost of newspapers had not decreased circulation rates.
Some early-twentieth-century observers were concerned about sensationalism and editorial independence or the effects of war on the press, while others worried about the future of democracy in the context of Munsey-wrought newspaper industry mergers. Oswald Villard, writer for The Nation and the New York Evening Post, founder of the American Anti-Imperialist League, and the first treasurer of the National Association for the Advancement of Colored People, argued that consolidation threatened democracy. Most newspapers lacked commercial independence and were beholden to advertisers who limited what they could publish. He was also concerned about the political implications of audience fragmentation: “Not today can one, no matter how trenchant their pen, be in a garret and expect to reach the conscience of a public by seventy millions larger than the America of Garrison and Lincoln.” Villard, however, held out hope that the views of ‘great men’ would find an audience, even if it meant bypassing the press. He did not predict new media forms, but looked back at old ones: “the prophet of the future will make his message heard, if not by a daily, then by a weekly; if not by a weekly, then by pamphleteering in the manner of Alexander Hamilton; if not by pamphleteering then by speech in the market-place” (1923, 315).
After World War II, journalism experienced a period of stability that gave it an aura of permanence, as if media institutions were constants amidst other economic, social, and cultural changes. Future concerns during this period centered on issues of technology and media consolidation. In 1947, for example, the Hutchins Commission on Freedom of the Press predicted that newspapers would soon be sent from FM radio stations to personal facsimile machines. These devices would print, fold, and deposit them in the hands of U.S. householders each morning (34-45). News workers and industry analysts predicted that technologies as diverse as citizens band radio, cable TV, camcorders, and CD-ROMs would, for better or worse, alter the production or consumption of news and either enhance or impede democratic processes (Curran 2010a). In the 1980s and 90s, journalists and media critics pointed to the pernicious effects of monopolization in national and regional markets. They feared the one-newspaper town and the absorption of local newspapers by media franchises. Michael Kinsley recalls that, in the pre-Internet period, “at symposia and seminars on the Future of Newspapers, professional worriers used to worry that these monopoly or near-monopoly newspapers were too powerful for society’s good” (2014).
Time, Space, and Journalism
Time is not a natural resource that springs from the Earth, but a cultural and social construct imagined and experienced in multiple ways (Fabian 1983).[4] Some social theorists argue that the sensation of rapid acceleration is a key feature of the modern experience of time (Crary 2013; Rosa 2013). Hartmut Rosa, for example, has argued that time compression has reached a point where the hamster wheel or treadmill has become an apt metaphor for modern life. Work speedups and technological immersion are necessary just to maintain social stasis, without the possibility of advancement or breaking free (Rosa 2010). For Rosa and other theorists of acceleration, this speedup leaves you mired in the present, anticipating the future with a sense of dread. The reality is that there is no uniform experience of time; our experience depends upon our position within circuits of information and capital (Sharma 2014). But when it comes to technological and economic speedup, journalism may be the canary in the coal mine. Reporters like Maverick experienced this treadmill effect as early as the 1840s. In 1918, Francis Leupp described the quickening pace of news work in the electric age:
We must reckon with the progressive acceleration of the pace of our 20th century life generally. Where we walked in the old times we run in these; where we ambled then, we gallop now. In the age of electric power, high explosives, articulated steel frames, in the larger world; of the long-distance telephone, the taxicab, and the card-index, in the narrower. The problem of existence is reduced to terms of time-measurement. (39)
Like Maverick, Leupp experienced the dynamism of modern life and the dual pressures of accuracy and speed in journalism.
It makes sense that journalism would experience the present this way. As the quintessential modern form, news embodies planned obsolescence (Schwartz 1999). Journalism has undergone two centuries of shrinking intervals of newness and relevance: six months, a week, a day, an hour. With the rise of social media and Twitter, the intervals between news cycles have grown even shorter. In the twentieth century, edition release times and broadcast schedules helped carve the day into identifiable units with firm deadlines. But in a context where news can be posted around the clock and updated every minute, the clock is no longer a structuring device for journalism. Minutes, seconds, and the calendar click-over from one day to the next are the only salient units of time. News stories that were relevant and new last week often seem ancient a week later. A newsworthy event like President Trump pulling out of the Paris climate agreement can feel as distant as the Vietnam War the following week. New communication forms like Twitter, coupled with strategies of disinformation and the routinization of scandal, shatter perceptions of continuity. What we are experiencing now is not the end of history, as was proclaimed after the fall of the Berlin Wall, but the death of the present. In news, rapid acceleration has amnestic effects, similar to the experience of sleep deprivation.
If the main time/space vectors in journalism used to be deadlines and beats, the latter may also be losing their importance, giving way to a more fluid cut-and-run style of journalism. For example, the Washington Post’s Chris Cilizza suggests that young reporters should not decline stories saying, “that’s not my beat” (2016). Rather, in a context of dwindling opportunities, journalists should pursue any story available, whether or not it fits into the old-fashioned logic of beat work or the range of competence of individual journalists.[5] But while traditional beats may be losing their cogency, reporters must add a new online “beat” to their repertoire that entails close surveillance of social media and online news, a dynamic that some critics have argued creates a house of mirrors effect in the news industry (Reinemann and Baugut 2013).
Technology and Uncertainty in the Professions
Journalism may be the paradigmatic case of a profession imperiled by a new technology, but its concerns about time and technological displacement cannot be generalized to other spheres. Take lawyers, social workers, and physicians. Uncertainty within the legal profession is largely unrelated to the digital; it was caused by the recent financial crisis coupled with the overproduction of new professionals. Jobs for newly minted JDs evaporated during the recession, leading to a decline in the number of law school applicants after 2010. With enrollment down, the future of smaller law schools became uncertain, and many schools lowered admission standards to stay afloat (Olson 2015; Pistone and Horn 2016). The profession has been in crisis, but not because of the Internet, and there is even some evidence that law positions are coming back (Solomon 2015). Uncertainty for social workers began even earlier, when the Clinton administration began dismantling the welfare state. Despite the obvious need for such professionals, government, non-profit, and other social service jobs have seen a quarter-century decline because of deep budgetary cuts that began in the 1990s (Reisch 2013).
Physicians seem least concerned with the future. They worry more about burnout than they do about the fate of their profession. The future is typically invoked in discussions about labor shortages and descriptions of new developments at the intersection of medicine and technology. Articles on the future of medicine routinely tout new developments like 3D printers that can form living cells into new organs (Mellgard 2015). Digitalization has changed many aspects of medicine: electronic medical records and charting alter the way nurses and physicians access information, for instance. But it has not led to credible speculation about replacing physicians with bots. Contrast this with some news workers’ worries about replacement by computer programs like Automated Insights’ narrative generation system, Wordsmith. The Associated Press now employs Wordsmith to produce its quarterly earnings reports and other stories, and has become so confident in these auto-generated stories that it runs many of them without prior vetting (the rare human-edited AI story is said to have had “the human touch”) (Miller 2015). Nor have drones been proposed as a viable alternative to human physicians, as they have been for newsgatherers and photojournalists (Etzler 2016).[6]
In none of these other cases is technology the primary motor of destabilization. The character of future angst in the professions, therefore, is occupation dependent. And journalism, it seems, is uniquely sensitive and vulnerable to technology. Every widely adopted communications technology – the steam press, radio, the net – has restructured news and led to audience expansion or contraction. In this sense, there is nothing new about journalism’s dependence on and transformation by technologies. The one constant is that journalists work in a field of technological contingency.
Conclusion: Euphoria and Dysphoria in Journalism
Visions of the future are also statements about the present. Political and economic conditions, labor concerns, and beliefs about the nature of time are contained within predictive thought. The question of journalism’s future has been asked both at moments when a number of possibilities were on the table and at moments when fewer options were imaginable. Sometimes predictions are made when a journalist has a stake in seeing a particular vision enacted. There was no social stasis or treadmill for Munsey, who saw conglomeration as the key to good journalism, or for Stead, who imagined himself as the heroic journalist proselytizer. Both saw themselves as leaders of the free world. Feelings of euphoria and dysphoria, therefore, come and go and are not unique to one era. Nineteenth-century journalists like Stead and Maverick imagined their field’s future and the journalist’s future roles in society. Both were “feeling it,” riding high on the wave of mechanization.
William T. Stead, 1909 (image source: Wikipedia and GIPHY)
Social roles are also embedded within occupational visions of the future. Will tomorrow’s journalists be tellers of truth, interpreters of data, shapers of public opinion, informers of policy makers, imaginers of social utopias? Some commentators insist that news must change to remain relevant in the digital age. In a world of abundant facts, reporters should be master interpreters, explaining the “what” and “how” to the public rather than reciting basic information (Cilizza 2016; Stephens 2014). As older models of journalism become outmoded, either by the Web or by computer programs, the hope is that professional journalists will find a niche explaining events. A similar impulse lies behind data-driven journalism, but in this case the journalists refashion themselves as computer workers, scraping the Web for reams of data, interpreting it, and presenting it to audiences in visually and narratively compelling ways. In solutions-based journalism, the reporter is a meta-social worker or public policy specialist, proposing potential solutions to local social problems based on what other locales have found successful.
There is also an emerging patronage system in which billionaires, foundations, and small donations prop up capital-intensive journalistic forms like investigative journalism. This is a good stopgap measure, and much of the work supported by tech billionaires like Jeff Bezos, Pierre Omidyar, and others has typically been of high quality. But it raises the question: can journalists today write exposés about the very people and tech companies sponsoring their journalism, the way Ida Tarbell wrote about Standard Oil?
The social roles that future-of-news experts imagine might come to pass, but not always in the way they expect. Stead’s call for government by journalism, for instance, is certainly embodied in a figure like Breitbart’s Steve Bannon. Although Stead would disagree with his political vision and journalistic practices, Bannon is also “feeling it,” envisioning a future of infinite possibilities.
Occupational forecasting serves both psychological and pragmatic ends: it reduces anxieties at the same time that it identifies trends to guide present-day action. Because the future is speculative and can only be imagined or modeled, not recreated from memory, artifact, or written record, prediction-based advice runs a high risk of misdirection. We can safely assume that prognosticators will not determine the actual future of journalism. If Stead were really clairvoyant, the Titanic would have been spared and journalism saved. As Robert Heilbroner suggests, prediction is an exercise in futility. It is better to “ask whether it is imaginable… to exercise effective control over the future-shaping forces of Today” (1995, 95). It is only in this sense that discussions of the future and the social experiments they generate do, in fact, transform the field.
_____
Gretchen Soderlund is Associate Professor of Media History in the University of Oregon’s School of Journalism and Communication. She is the author of Sex Trafficking, Scandal, and the Transformation of Journalism, 1885-1917 (University of Chicago Press) and editor of Charting, Tracking, and Mapping: New Technologies, Labor, and Surveillance, a special issue of Social Semiotics. Her articles have appeared in such journals as American Quarterly, Feminist Formations, The Communication Review, Humanity, and Critical Studies in Media Communication.
The author would like to thank Patrick Jones for his comments on an earlier draft of this essay.
_____
Notes
[1] The tremendous success of nineteenth-century self-made owner-editors like Benjamin Day or S.S. McClure can be attributed to innovations in content and funding models. In the 1830s, Day lowered the cost of his newspaper to only a penny, making it affordable to more New Yorkers, and made up for the decreased revenue by selling more advertising space. McClure did the same thing for magazines in the 1890s, selling his publication for a nickel instead of the standard quarter while increasing ad revenue. In doing so, both took advantage of untapped opportunities to reshape the news field in their respective eras.
[2] Before the 2016 election, this rhetoric united the libertarian left and the right. In a 2014 interview on Democracy Now that, not coincidentally, got positive play in the rightwing media, Glenn Greenwald lambasted Washington Post editors as, “old-style, old-media, pro-government journalists… the kind who have essentially made journalism in the U.S. neutered and impotent and obsolete” (Watson 2014).
[3] Villard also said of Munsey: “There is not a drop of the reformer’s blood in him; there is in him nothing that cries out in pain in response to the travails of multitudes” (1923, 72).
[4] The representational features of future thought are also culturally and historically specific (Rosenberg and Harding 2005).
[5] This more mobile, targeted approach to news production, with fewer fixed duties or beats, may offer a more varied work experience. But it has labor implications as well: it edges toward freelancing, and it may be difficult to say no for reasons beyond beats. Further, reporters may find themselves in over their heads when reporting on topics in which they can claim no expertise.
[6] Indeed, the FAA changed its policy on August 29, 2016 so that journalists do not need pilot’s licenses to fly drones, which will precipitate the increased use of the tool in the future (Etzler 2016).
Lowrey, Wilson and Zhou Shan. 2016. “Journalism’s Fortune Tellers: Constructing the Future of News.” Journalism. 1-17.
Maverick, Augustus. 1870. Henry J. Raymond and the New York Press for Thirty Years: Progress of American Journalism from 1840 to 1870. Hartford, CT: A.S. Hale and Company.
Reinemann, Carsten and Philip Baugut. 2014. “German Political Journalism Between Change and Stability.” In Raymond Kuhn and Rasmus Kleis Nielsen, eds., Political Journalism in Transition: Western Europe in a Comparative Perspective. New York: Palgrave Macmillan.
Reisch, Michael. 2013. “Social Work Education and the Neo-liberal Challenge: The U.S. Response to Increasing Global Inequality.” Social Work Education 32. 715-733.
Rescher, Nicholas. 1998. Predicting the Future: An Introduction to the Theory of Forecasting. Albany, NY: State University of New York Press.
Rosa, Hartmut. 2010. “Full Speed Burnout? From the Pleasures of the Motorcycle to the Bleakness of the Treadmill: The Dual Face of Social Acceleration.” International Journal of Motorcycle Studies 6:1.
Rosa, Hartmut. 2013. Social Acceleration: A New Theory of Modernity. New York: Columbia University Press.
Rosenberg, Daniel and Sandra Harding. 2005. “Introduction: Histories of the Future.” In Daniel Rosenberg and Sandra Harding, eds., Histories of the Future. Durham, NC: Duke University Press.
Seamans, Robert & Feng Zhu. 2013. “Responses to Entry in Multi-Sided Markets: The Impact of Craigslist on Local Newspapers.” Management Science 60. 476-493.
Sharma, Sarah. 2014. In the Meantime: Temporality and Cultural Politics. Durham, NC: Duke University Press.
Schwartz, Vanessa. 1999. Spectacular Realities: Early Mass Culture in Fin-de-Siècle Paris. Oakland, CA: University of California Press.
The supposition [is] that higher education and schooling in general serve a democratic society by nourishing hearty citizenship.
– Richard Ohmann (2003)
What are the risks of writing in public in this digital age? Of being a “speaking” subject in the world of public cyberspace? Physical and legal risks are discussed in work such as Nancy Welch’s (2005) recounting of her student’s encounter with the police for literally posting her poems where bills or poems were not meant to be posted. Weisser recounts a “hallway conversation” about public writing as “shared work, shared successes, and, occasionally, shared commiseration” (2002, xii). Likewise, in writing about blogging in the classroom, Charles Tryon describes the way blogging with interactions from the public provokes “conversations” about the “relationship between writing and audience,” conversations that can, at times, be uncomfortable (2006, 130). There is an assumption that when discussing the “risks” of writing in public here in the United States, we instructors are discussing the risks of exercising the rights of citizenship, of first amendment disagreement and discord. Yet the assumption that the speaking subject has first amendment rights, that they possess or can express citizenship, is one which elides the risks some students face when they write in public, especially in digital spaces where the audience can be a vast everyone. What is the position of one who writes in public literally without the possibility of citizenship? In the absence of US citizenship, taking the position of subject, offering testimony about their situation, and protesting it as unjust can provoke not simply abuse, which is disturbing enough, but threats of legal action. Public writing opens them and their families up to threats of reporting, detainment, and possible deportation by the government. Given these very real risks, I question whether, from the standpoint of Chicanx studies pedagogy, we should be advocating for and instructing our students to express their thoughts on their positions, on their lives, in public.[1] This question feels especially urgent when, given the digital turn, writing in “public” can mean that a single tweet results in huge consequences, from public humiliation to the horror of doxxing. To paraphrase Eileen Medeiros, who writes about these risks in another context, “was it all worth a few articles and essays,” or, to make it more contemporary, is the risk worth a few blog posts or ’zines? (2007, 4).
This said, I was and am convinced about the power and efficacy of having students write in public, especially for Chicanx studies classrooms. Faced with the possibilities offered by the Internet and their effects on the Chicanx studies classroom, my response has been enthusiasm for the electronic, for electronic writing, for making our discourse public. Chicanx pedagogy is, in part, based on a repudiation of top-down instruction. As a pedagogy, public writing instead advocates bringing the community into the classroom and the classroom into the community. Blogging is an effective way to do this. Especially given the relative lack of Chicanx digital voices on the ‘net, I yearn for my students to own part of the Internet, to be seen and heard. This enthusiasm for having my Chicanx studies students write for the Internet came first out of my final year of dissertation research, when I “discovered” that, online, terms from the Chicano Movement like “Aztlán” and “La Raza” were being used by reactionary racists to (re)define and revise the history of the Chicano Movement as racist and anti-Semitic, wildly distorting the goals, philosophies, and accomplishments of revolutionary movements. More disturbing, these mis-definitions were well enough linked to appear on the first few pages of search results, inflating their importance and giving them a sense of being “truth” merely by virtue of their being oft repeated. My students’ writings, my thinking went, would change this. Their work, I imagined, would be interventions in the false discourse, changing, via Google hits, what people would find when they entered Chicanx studies terms into their browsers. Besides, I did my undergraduate degree at a university in the Midwest without a Chicanx or Latinx studies department. My independent study classes in Chicanx literature were constructed from syllabi for courses I found online. I was, therefore, imagining our public writing being used by people without access to a Chicanx studies classroom to further their own educations.
Public writing, generally defined as writing for an audience beyond the professor and classroom, can be figured in a variety of ways, but descriptions, especially those in the form of learning objectives and outcomes, tend toward a focus on writing centered on social change and the fostering of citizenship. This concept of “citizenship” is often repeated in composition studies, where public writing is discussed as advocacy, as service, as an expression of active citizenship. Indeed, the public writer has been figured by theorists as an expression of “citizenship” and an exercise in and demonstration of first amendment rights. Public writing is presented as being, as Christian Weisser wrote, the “discourse of public life”; Weisser further writes of his pride in being “a citizen in a self-reforming constitutional democracy” (xiv). Public writing is presented as nurturing citizenship, and therefore we are encouraged to foster it in our classrooms, especially in the teaching of writing. Weisser also writes of the teaching of public writing as a “shared” classroom experience, sometimes including hardships, between students and instructors.
However, this discussion of “citizenship” and the idea of creating it through teaching rather disturbingly echoes, to me, the idea of assimilation to the dominant culture, an idea that Chicana/o studies pedagogy resists (Perez 1993, 276). Rather than the somewhat nationalistic goal of creating and fostering “citizenship,” Chicana/o studies, especially since the 1987 publication and adoption of Gloria Anzaldúa’s Borderlands, has aimed at a discourse that “explains the social conditions of subjects with hybrid identities” (Elenes 1997, 359). These hybrid identities, and the assumption of the position of subjecthood by those who resist the idea of nation, are fraught, especially when combined with public writing. As Anzaldúa writes, “[w]ild tongues can’t be tamed, they can only be cut out” (1987, 76). The responses to Chicanx and Latinx students speaking or writing their truth can be demands for their silence.
My students’ and my use of public writing via blogging and Twitter was productive through upper-division classes I taught on Latina coming of age stories, Chicana feminisms, and Chicana/o gothic literature. After four courses taught using blogs created on my WordPress multisite installation, with author accounts created for each student, I felt that I had the blogging with students / writing in public / student archiving thing down. My students had always had the option to write pseudonymously, but most had chosen to write under their own names, wanting to create a public digital identity. The blogs were on my domain and identified with their specific university and course. We had been contacted by authors (and, in one case, an author’s agent), filmmakers, and artists, and other bloggers had linked to our work. My students and I could see we had a small but steady traffic of people accessing student writing, with their work being read and seen, and, on a few topics, our class pages were on the first pages of a Google search. Therefore, when I was scheduled to teach a broader “Introduction to Chicana/o Studies” course, I decided to apply the same structure to this one-hundred-level survey course: students publicly blogging their writing on a common course blog on issues related to Chicanx studies. Although, in keeping with my specialization, the course was humanities heavy, with a focus on history, literature, and visual art, the syllabus also included a significant amount of work in social science, especially sociology and political science, forming the foundations of Chicanx studies theory. The course engaged a number of themes related to Chicanx social and political identity, focusing a significant amount of work on communities and migrations. The demographics of the course were mixed. In the thirty-student class, about half identified as Latina/o. The rest were largely white American, with several European international students.
As we read about migrations, studying and discussing the politics behind both the immigrant rights May Day marches in Los Angeles and the generations of migrations back and forth across the border, movements of people which long pre-dated the border wall, we also discussed the contemporary protest writings and art of the UndocuQueer Movement. In the course of class discussion, sometimes in response to comments their classmates were making that left them feeling that undocumented people were being stereotyped, several students self-disclosed that they were either the children of undocumented migrants or were undocumented themselves. These students discussed their experience of not being citizens of the country they had lived in since young childhood, the fear of deportation they felt for themselves or their parents, and its effect on them. The students also spoke of their hopes for a future in which they, and/or their families, could apply for and receive legal status, become citizens. This self-disclosure and recounting of personal stories had, as had been my experience in previous courses, a significant effect on the other students in the class, especially those who had never considered the privileges their legal status afforded them. In the process the undocumented students became witnesses and experts, giving testimony. They made it clear they felt empowered by giving voice to their experience and seeing that their disclosures changed the minds of some of their classmates about who was undocumented and what they looked like.
After seeing the effect the testimonies had in changing the attitudes of their classmates, my undocumented students, especially one who had strongly identified with the UndocuQueer movement (in one case, the student had already participated in demonstrations), began to blog about their experiences, taking issue with stereotypes of migrants and discussing the pain reading or hearing a term like “illegals” could cause. Drawing on the course-assigned examples of writers Anzaldúa and Cherríe Moraga, they used their experiences, their physical bodies, as both evidence and metaphor of the undocumented state of being in-between, not belonging fully to any country or nation. They also wrote of their feelings of invisibility on a majority white campus where equal rights of citizenship were assumed and taken for granted. Their writing was raw and powerful, challenging, passionate and, at times, angry. These student blog posts seemed the classic case of students finding their voices. As an instructor, I was pleased with their work and gave them positive feedback, as did many of their classmates. Yet as their instructor, I was focused on the pedagogy and their learning outcomes relative to the course and had not fully considered the risk they were taking writing their truth in public.
As part of being instructor and WordPress administrator, I was also moderating comments on the blog. The settings had the blog open to public comments, with the first comment from any email address being hand-moderated in order to prevent spamming. However, for the most part, unless an author we were reading had been alerted via Twitter, comments were between and among students in the course, which gave the course blog the feeling of being an extension of the classroom community, an illusion of privacy and intimacy. Because of this closeness, with posts and comments all coming from class members, the students and I largely lost sight of the fact that we were writing in public; the space came to feel private. This illusion of privacy was shattered when I received a comment for moderation from what turned out to be a troll demanding “illegals” be deported. Although it was not posted, what I read was an attack on one of my students, hinting that the poster had done (or would do) enough digging to identify the student and their family. Not only was the comment abusive, but the commenter also claimed to have reported my student to ICE.
I was reminded of the comment and the violent anger directed at undocumented students, however worthy they might try to prove themselves, again in June 2016, when Mayte Lara Ibarra, an honors high school student in Texas, tweeted her celebration of her status as valedictorian, her high GPA, her honors at graduation, her scholarship to the University of Texas, and her undocumented status. While she received many messages of support, she felt forced to remove her tweet due to abuse and threats of deportation by users who claimed to have reported her and her family to Immigration and Customs Enforcement (ICE).
When I received this comment for moderation, my first response was to go through and change the status of the blog posts testifying about being undocumented to “drafts” and then to contact the students who had self-disclosed their status to let them know about the comment and the threat. I feared for my students and their families. Had I encouraged behavior–public writing–that made them vulnerable? I wondered whether I should go to my chair for advice. Guilty self-interest was also present. At the time I was an adjunct instructor at this university, hired semester-to-semester to teach individual classes. How would my chair, the department, the university feel about my having put my students at risk to write for a blog on my own domain? Suddenly the “walls” set up by Blackboard, the university’s learning management software, which I had dismissed for being “closed,” looked appealing as I wondered how to manage this threat. Much of the discourse around public writing for the classroom discusses “risk,” but whose risk are we talking about, how much of it can students take, and, as their instructor, what sort of risks can I be responsible for allowing them to take? Nancy Welch discusses the “attention toward working with students on public writing” as an expression of our belief as instructors that this writing “can matter in tangible ways” (2005, 474), but I questioned whether it could matter enough to be worth tangible risk to my students’ and their families’ physical bodies at the hands of a nation-state that has detained and deported more than 2.5 million people since 2009 (Gonzalez-Barrera and Krogstad 2014). While some of the students in this class qualified for Deferred Action for Childhood Arrivals (DACA), giving them something of a shield, their parents and other members of their families did not all have this protection.
By contrast, perhaps not surprisingly, my students, all of them first- and second-year students, felt no risk, or at least were sure they were willing to take the risk associated with the public writing. They did not want their writing taken down or hidden. My students felt they were part of a movement, a moment, to expressly leave the shadows. One even argued that the abusive comment should be posted so they could engage with its author. We discussed the risks. Initially I wanted them to be able to make the choice themselves; I did not want to take their voice or power from them. Yet that was not true—what I wanted was for them to choose to take the writing down and absolve me of the responsibility for the danger in which my assignments had placed them. On the other hand, though, as I explained to them, the power and responsibility rested with me. I could not conscience putting them at risk on a domain I owned, for doing work I had assigned. They agreed, albeit reluctantly. What I find most shameful in this is that it was not their own risk, but their understanding of mine, of my position in the department and university, that made them agree I needed to take their writing down. We made their posts password-protected, shared the password with the other students for the duration of the class, and the course ended uneasily in our mutual discomfort. Nothing was comfortably resolved at this meeting of immigration law with my students’ bodies and their public writing. At the end of the course, after notifying them so they could save their writing if they wished, I did something I had never done before. I removed the students’—all of the students’—blogging from the Internet by archiving the course blog and removing it from public view.
As I began to process and analyze what had happened, I wondered what could be done differently. Was there a way to allow my students to write in public yet somehow shield them from these risks? After I presented and discussed this experience at HASTAC in 2015, I was approached with a number of possible solutions, some of which would help. One, offered very generously, was to host my next course blog on the HASTAC site, where commenting requires registration. This was a partial solution that would protect against trolling, but I questioned whether it could protect my students from identification, from them and their families being reported to the authorities. The answer was no, it could not.
In 2011, Amy Wan examined and problematized the idea of creating citizens and expressing citizenship through the teaching of literacy, a concept she traces through composition pedagogy, especially as it is expressed in syllabi and learning objectives. The pedagogy of public writing is imbued with the assumption of citizenship, with the production of citizens as its goal. Taken this way, public writing becomes a duty. Yet there is a problem with this objective of producing citizens and this desire for citizenship when it comes to students in our classes who lack legal citizenship. Anthropology in the 1990s tried to work around and give dignity to those without “full” citizenship by presenting the idea of “cultural citizenship” as a way to refer to shared values of belonging among people without legal citizenship. This was done as a way of trying to de-marginalize the marginalized and reimagine citizenship so that no one’s status was second class (Rosaldo 1994, 402). But the situation of undocumented people belies this distinction, however noble and well rooted in social justice its intention. To be undocumented in the United States is not only to be dispossessed of the rights of citizenship, but also to have any exercise of the rights or responsibilities of citizenship through public speaking or writing taken as incitement against the nation-state, with some citizens viewing it as a personal assault.
This problem of the exercise of rights being seen as incitement is demonstrated by the way the display of the Mexican flag at protests for immigrant rights is read as a rejection of the United States and a refusal of US citizenship, despite the protests themselves being demands or pleas for the creation of a citizenship path. The mere display of Mexico’s flag is read as a provocation, an action which, even when done by citizens, destabilizes citizenship, seems to remove protesters’ first amendment rights, and prompts cries that they should “Go back to Mexico,” or, more recently, for the government to “Build a wall.” Latinxs displaying national flags are accused of wanting to conquer (or reconquer) the southwest, reclaiming it from the United States for Mexico. This anxiety about being “conquered” by the growing Latinx population perhaps reveals a fear that the southwestern states (what Chicanxs call Aztlán) are not so much a stable part of the conquered body as an expression of how the idea of “nation” is itself unstable within US borders. When a non-citizen, a subject sin papeles, writes about the experience of being undocumented, they are faced with a backlash from those who believe their existence, if they are allowed existence in the United States at all, is one without rights, without voice. Any attempt to give voice to their position brings overt threats of government action against their tenuous existence in the US, however strong their cultural ties to the United States. My students, writing in public about their undocumented status, are reminded that their bodies are not citizens and that the right to free speech, the right to write one’s truth in public, is one given to citizen subjects.
This has left me with a paradox. My students should write in public. Part of what they are learning in Chicanx studies is the importance of their voices, and that their experiences and their stories are ones that should be told. Yet, given the risks of discussing migration and immigration through public writing, I wonder whether I as an instructor should encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched every aspect of their lives. From a practical point of view, I could set up stricter anonymity so their identities are better shielded. I could have them create their own blogs, thus passing the responsibility for protecting themselves on to them. Or I could make the writing “public” only in the sense of its being public in the space of the classroom, by using learning management software to keep it, and them, behind a protective wall.
_____
Annemarie Perez is an Assistant Professor of Interdisciplinary Studies at California State University Dominguez Hills. Her area specialty is Latina/o literature and culture, with a focus on Chicana feminist writer-editors from 1965 to the present, and on digital humanities and digital pedagogy and their intersections and divisions within ethnic and cultural studies. She is writing a book on Chicana feminist editorship, using digital research to perform close readings across multiple editions and versions of articles and essays.
[*]This article is an outgrowth of a paper presented at HASTAC 2015 for a session titled: DH: Affordances and Limits of Post/Anti/Decolonial and Indigenous Digital Humanities. The other panel presenters were: Roopika Risam (moderator), Siobhan Senier, Micha Cárdenas and Dhanashree Thorat.
_____
Notes
[1] “Chicanx” is a gender neutral, sometimes contested, term of self-identification. I use it to mean someone of Mexican origin, raised in the United States, identifying with a politic of resistance to mainstream US hegemony and an identification with indigenous American cultures.
_____
Works Cited
Anzaldúa, Gloria. 1987. Borderlands/La Frontera: The New Mestiza. San Francisco: Aunt Lute Books.
Elenes, C. Alejandra. 1997. “Reclaiming the Borderlands: Chicana/o Identity, Difference, and Critical Pedagogy.” Educational Theory 47:3. 359-75.
Moraga, Cherríe. 1983. Loving in the War Years: Lo Que Nunca Pasó Por Sus Labios. Boston, MA: South End Press.
Ohmann, Richard. 2003. Politics of Knowledge: The Commercialization of the University, the Professions, and Print Culture. Middleton, CT: Wesleyan University Press.
Perez, Laura. 1993. “Opposition and the Education of Chicana/os.” In Cameron McCarthy and Warren Crichlow, eds., Race, Identity and Representation in Education. New York: Routledge.
Rosaldo, Renato. 1994. “Cultural Citizenship and Educational Democracy.” Cultural Anthropology 9:3. 402-411.
Tryon, Charles. 2006. “Writing and Citizenship: Using Blogs to Teach First-Year Composition.” Pedagogy 6:1. 128-132.
Wan, Amy J. 2011. “In the Name of Citizenship: The Writing Classroom and the Promise of Citizenship.” College English 74. 28-49.
Weisser, Christian R. 2002. Moving Beyond Academic Discourse: Composition Studies and the Public Sphere. Carbondale: Southern Illinois University Press.
Welch, Nancy. 2005. “Living Room: Teaching Public Writing in a Post-Publicity Era.” College Composition and Communication 56:3. 470-492.
“Human creativity and human capacity is limitless,” said the Bangladeshi economist Muhammad Yunus to a darkened room full of rapt Austrian elites. The setting was TEDx Vienna, and Yunus’s address bore all the trademark features of TED’s missionary version of technocratic idealism. “We believe passionately in the power of ideas to change attitudes, lives and, ultimately, the world,” goes the TED mission statement, and this philosophy is manifest in the familiar form of Yunus’s talk (TED.com). The lighting was dramatic, the stage sparse, and the speaker alone on stage, with only his transformative ideas for company. The speech ends with the zealous technophilia that, along with the minimalist stagecraft and quaint faith in the old-fashioned power of lectures, defines this peculiar genre. “This is the age where we all have this capacity of technology,” Yunus declares: “The question is, do we have the methodology to use these capacities to address these problems?… The creativity of human beings has to be challenged to address the problems we have made for ourselves. If we do that, we can create a whole new world—we can create a whole new civilization” (Yunus 2012). Yunus’s conviction that now, finally and for the first time, we can solve the world’s most intractable problems, is not itself new. Instead, what TED Talks like this offer is a new twist on the idea of progress we have inherited from the nineteenth century. And with his particular focus on the global South, Yunus riffs on a form of that old faith, which might seem like a relic of the twentieth: “development.” What is new, then, about Yunus’s articulation of these old faiths? It comes from the TED Talk’s combination of prophetic individualism and technophilia: this is the ideology of “innovation.”
“Innovation”: a ubiquitous word with a slippery meaning. “An innovation is a novelty that sticks,” writes Michael North in Novelty: A History of the New, pointing out the basic ontological problem of the word: if it sticks, it ceases to be a novelty. “Innovation, defined as a widely accepted change,” he writes, “thus turns out to be the enemy of the new, even as it stands for the necessity of the new” (North 2013, 4). Originally a pejorative term for religious heresy, in its common use today “innovation” is used as a synonym for what would have once been called, especially in America, “futurity” or “progress.” In a policy paper entitled “A Strategy for American Innovation,” then-President Barack Obama described innovation as an American quality, in which the blessings of Providence are revealed no longer by the acquisition of territory, but rather by the accumulation of knowledge and technologies: “America has long been a nation of innovators. American scientists, engineers and entrepreneurs invented the microchip, created the Internet, invented the smartphone, started the revolution in biotechnology, and sent astronauts to the Moon. And America is just getting started” (National Economic Council and Office of Science and Technology Policy 2015, 10).
In the Obama administration’s usage, we can see several of the common features of innovation as an economic ideology, some of which are familiar to students of American exceptionalism. First, it is benevolent. Second, it is always “just getting started,” a character of newness constantly being renewed. Third, like “progress” and “development” before it, innovation is a universal, benevolent abstraction made manifest through material, economic accomplishments. But even more than “progress,” which could refer to political and social accomplishments like universal suffrage or the polio vaccine, or “development,” which has had communist and social democratic variants, innovation is inextricable from the privatized market that animates it. For this reason, Obama can treat the state-sponsored moon landing and the iPhone as equivalent achievements. Finally, even if it belongs to the nation, the capacity for “innovation” really resides in the self. Hence Yunus’s faith in “creativity,” and Obama’s emphasis on “innovators,” the protagonists of this heroic drama, rather than the drama itself.
This essay explores the individualistic, market-based ideology of “innovation” as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called “development.” I am referring principally to projects that often go under the name of “social innovation” (or, relatedly, “social entrepreneurship”), which Stanford University’s Business School defines as “a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions” (Stanford Graduate School of Business). “Social innovation” often advertises itself as offering “market-based solutions to poverty,” proceeding from the conviction that it is exclusion from the market, rather than the opposite, that causes poverty. The practices grouped under this broad umbrella include projects as different as the micro-lending banks for which Yunus shared the 2006 Nobel Peace Prize; smokeless, cell-phone-charging cookstoves for South Asia’s rural peasantry; latrines that turn urine into electricity, for use in rural villages without running water; and the edtech academic and TED honoree Sugata Mitra’s “self-organized learning environment” (SOLE), which appears to consist mostly of giving internet-enabled laptops to poor children and calling it a day.
The discourse of social innovation is a theory about economic process and also a story of the (first-world) self. The ideal innovator that emerges from the examples to follow is a flexible, socially autonomous individual, whose creativity and prophetic vision, nurtured by the market, can refashion social inequalities as discrete “problems” that simply await new solutions. Guided by a faith in the market but also shaped by the austerity that has slashed the budgets of humanitarian and development institutions worldwide, social innovation ideology marks a retreat from the social vision of development. Crucially, the ideologues of innovation also answer a post-development critique of Western arrogance with a generous, even democratic spirit. That is, one of the reasons that “innovation” has come to supersede “development” in the vocabulary of many humanitarian and foreign aid agencies is that innovation ideology’s emphasis on individual agency serves as a response to the legitimate charges of condescension and elitism long directed at Euro-American development agencies. But compromising the social vision of development also means jettisoning the ideal of global equality that, however deluded, dishonest, or self-serving it was, also came with it. This brings us to a critical feature of innovation thinking that is often disguised by the enthusiasm of its tech-economy evangelizers: it is in fact a pessimistic ideal of social change. The ideology of innovation, with its emphasis on processes rather than outcomes, and individual brilliance over social structures, asks us to accommodate global inequality, rather than challenge it. It is a kind of idealism, therefore, well suited to our dispiriting neoliberal moment, where the sense of possibility seems to have shrunk.
My objective is not to evaluate these efforts individually, nor even to criticize their practical usefulness as solution-oriented projects (not all of them, anyway). Indeed, in response to the difficult, persistent question, “What is the alternative?” it is easy, and not terribly helpful, to simply answer “world socialism,” or at least “import-substitution industrialization.” My objective is perhaps more modest: to define the ideology of “innovation” that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.
An Orthodoxy of Unorthodoxy: Innovation, Revolution, and Salvation
Given the immodesty of the innovator archetype, it may seem odd that innovation ideology could be considered pessimistic. On its own terms, of course, it is not; but when measured against the utopian ambitions and rhetoric of many “social innovators” and technology evangelists, their actual prescriptions appear comparatively paltry. Human creativity is boundless, and everyone can be an innovator, says Yunus; this is the good news. The bad news, unfortunately, is that not everyone can have indoor plumbing or public lighting. Consider the “pee-powered toilet” sponsored by the Gates Foundation. The outcome of inadequate sewerage in the underdeveloped world has not been changed; only the process of its provision has been innovated (Smithers 2015). This combination of evangelical enthusiasm and piecemeal accommodation becomes clearer, however, when we excavate innovation’s tangled history, which by necessity, the word seems at first glance to lack entirely.
Figure 1. A demonstration toilet, capable of powering a light, or even a mobile phone, at the University of the West of England (photograph: UWE Bristol)
For most of its history, the word has been synonymous with false prophecy and dissent: initially, it was linked to deceitful promises of deliverance, either from divine judgment or more temporal forms of punishment. For centuries, this was the most common usage of this term. The charge of innovation warned against either the possibility or the wisdom of remaking the world, and disciplined those “fickle changelings and poor discontents,” as the King says in Shakespeare’s Henry IV, grasping at “hurly-burly innovation.” Religious and political leaders tarred self-styled prophets or rebels as heretical “innovators.” In his 1634 Institution of the Christian Religion, for example, John Calvin warned that “a desire to innovate all things without punishment moveth troublesome men” (Calvin 1763, 716). Calvin’s notion that innovation was both a political and theological error reflected, of course, his own jealously kept share of temporal and spiritual authority. For Thomas Hobbes, “innovators” were venal conspirators, and innovation a “trumpet of war and sedition.” Distinguishing men from bees—which Aristotle, Hobbes says, wrongly considers a political animal like humans—Hobbes laments the “contestation of honour and preferment” that plagues non-apiary forms of sociality. Bees only “talk” when and how they have to; men and women, by contrast, chatter away in their vanity and ambition (Hobbes 1949, 65-67). The “innovators” of revolutionary Paris, Edmund Burke thundered later, “leave nothing unrent, unrifled, unravaged, or unpolluted with the slime of their filthy offal” (1798, 316-17). Innovation, like its close relative “revolution,” was upheaval, destruction, the reversal of the right order of things.
Figure 2: The Innovation Tango, in The Evening World
As Godin (2015) shows in his history of the concept in Europe, in the late nineteenth century “innovation” began to be recuperated as an instrumental force in the world, which was key to its transformation into the affirmative concept we know now. Francis Bacon, the philosopher and Lord Chancellor under King James I, was what we might call an “early adopter” of this new positive instrumental meaning. How, he asked, could Britons be so reverent of custom and so suspicious of “innovation,” when their Anglican faith was itself an innovation? (Bacon 1844, 32). Instead of being an act of sudden renting, rifling, and heretical ravaging, “innovation” became a process of patient material improvement. By the turn of the last century, the word had mostly lost its heretical associations. In fact, “innovation” was far enough removed from wickedness or malice in 1914 that the dance instructor Vernon Castle invented a modest American version of the tango that year and named it “the Innovation.” The partners never touched each other in this chaste improvement upon the Argentine dance. “It is the ideal dance for icebergs, surgeons in antiseptic raiment and militant moralists,” wrote Marguerite Marshall (1914), a thoroughly unimpressed dance critic in the New York Evening World. “Innovation” was then beginning to assume its common contemporary form in commercial advertising and economics, as a synonym for a broadly appealing, unthreatening modification of an existing product.
Two years earlier, the Austrian-born economist Joseph Schumpeter published his landmark text The Theory of Economic Development, where he first used “innovation” to describe the function of the “entrepreneur” in economic history (1934, 74). For Schumpeter, it was in the innovation process that capitalism’s tendency towards tumult and creative transformation could be seen. He understood innovation historically, as a process of economic transformation, but he also singled out an innovator responsible for driving the process. In his 1942 book Capitalism, Socialism, and Democracy, Schumpeter returned to the idea in the midst of war and the threat of socialism, which gave the concept a new urgency. To innovate, he wrote, was “to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on” (Schumpeter 2003, 132). As Schumpeter goes on to acknowledge, this transformative process is hard to quantify or professionalize. The elusiveness of his theory of innovation comes from a central paradox in his own definition of the word: it is both a world-historical force and a quality of personal agency, both a material process and a moral characteristic. It was a historical process embodied in heroic individuals he called “New Men,” and exemplified in non-commercial examples, like the “expressionist liquidation of the object” in painting (126). To innovate was also to do, at the local level of the production process, what Marx and Engels credit the bourgeoisie as a class with accomplishing historically: revolutionizing the means of production, sweeping away what is old before it can ossify. Schumpeter told a different version of this story, though. For Marx, capitalist accumulation is a dialectical historical process, but what Schumpeter called innovation was a drama driven by a particular protagonist: the entrepreneur.
In a sympathetic 1943 essay about Schumpeter’s theory of innovation, the Marxist economist Paul Sweezy criticized the centrality Schumpeter gave to individual agency. Sweezy’s interest in the concept is unsurprising, given how Schumpeter’s treatment of capitalism as a dynamic but destructive historical force draws upon Marx’s own. It is therefore not “innovation” as a process to which Sweezy objects, but the mythologized figure of the entrepreneurial “innovator,” the social type driving the process. Rather than a free agent, powering the economy’s inexorable progress, “we may instead regard the typical innovator as the tool of the social relations in which he is enmeshed and which force him to innovate on pain of elimination,” he writes (Sweezy 1943, 96). In other words, it is capital accumulation, not the entrepreneurial function, and certainly not some transcendent ideal of creativity and genius, that drives innovation. And while the innovator (the successful one, anyway) might achieve a pantomime of freedom within the market, for Sweezy this agency is always provisional, since innovation is a conditional economic practice of historically constituted subjects in a volatile and pitiless market, not a moral quality of human beings. Of course, Sweezy’s critique has not won the day. Instead, a particularly heroic version of the Schumpeterian sense of innovation as a human, moral quality liberated by the turbulence of capitalist markets is a mainstream feature of institutional life. An entire genre of business literature exists to teach the techniques of “managing creativity and innovation in the workplace” (The Institute of Leadership and Management 2007), to uncover the “map of innovation” (O’Connor and Brown 2003), to nurture the “art of innovation” (Kelley 2001), to close the “circle of innovation” (Peters 1999), to collect the recipes in “the innovator’s cookbook” (Johnson 2011), to give you the secrets of “the sorcerers and their apprentices” (Moss 2011)—business writers leave virtually no hackneyed metaphor for entrepreneurial creativity, from the domestic to the occult, untouched.
As its contemporary proliferation shows, innovation has never quite lost its association with redemption and salvation, even if it is no longer used to signify their false promises. As Lepore (2014) has argued about its close cousin, “disruption,” innovation can be thought of as a secular discourse of economic and personal deliverance. Even as the concept became rehabilitated as procedural, its deviant and heretical connotations were common well into the twentieth century, when Emma Goldman (2000) proudly and defiantly described anarchy as an “uncompromising innovator” that enraged the princes and oligarchs of the world. Its seeming optimism, which is inseparable from the disasters from which it promises to deliver us, is thus best considered as a response to a host of persistent anxieties of twenty-first-century life: economic crisis, violence and war, political polarization, and ecological collapse. Yet the word has come to describe the reinvention or recalibration of processes, whether algorithmic, manufacturing, marketing, or otherwise. Indeed, even Schumpeter regarded the entrepreneurial function as basically technocratic. As he put it in one essay, “it consists in getting things done” (Schumpeter 1941, 151).[1] However, as the book titles above make clear, the entrepreneurial function is also a romance. If capitalism was to survive and thrive, Schumpeter suggested, it needed to do more than produce great fortunes: it had to excite the imagination. Otherwise, it would simply calcify into the very routines it was charged with overthrowing. Innovation discourse today remains, paradoxically, both procedural and prophetic. The former meaning lends innovation discourse its piecemeal, solution-oriented accommodation to inequality. In this latter sense, though, the word retains some of the heretical rebelliousness of its origins. We are familiar with the lionization of the tech CEO as a non-conforming or “disruptive” visionary, who sets out to “move fast and break things,” as the famous Facebook motto went. The archetypal Silicon Valley innovator is forward-looking and rebellious, regardless of how we might characterize the results of his or her innovation—a social network, a data mining scheme, or Uber-for-whatever. The dissenting meaning of innovation is at play in the case of social innovation, as well, given its aim to address social inequalities in significant new ways. So, in spite of innovation’s implicit bias towards the new, the history and present-day use of the word remind us that its current meaning is seeded with its older ones. Innovation’s new secular, instrumental meaning is therefore not a break with its older, prohibited, religious connotation, but an embellishment of it: what is described here is a spirit, an ideal, an ideological rescrambling of the word’s older heterodox meaning to suit a new orthodoxy.
The Innovation of Underdevelopment: From Exploitation to Exclusion
In his 1949 inaugural address, which is often credited with popularizing the concept of “development,” Harry Truman called for “a bold new program for making the benefits of our scientific advances and industrial progress available for the improvement and growth of underdeveloped areas” (Truman 1949).[2] “Development” in U.S. modernization theory was defined, writes Nils Gilman, by “progress in technology, military and bureaucratic institutions, and the political and social structure” (2003, 3). It was a post-colonial version of progress that defined itself as universal and placeless; all underdeveloped societies could follow a similar path. As Kristin Ross argues, development in the vein of post-war modernization theory anticipated a future “spatial and temporal convergence” (1996, 11-12). Emerging in the collapse of European colonialism, the concept’s positive value was that it positioned the whole world, south and north, as capable of the same level of social and technical achievement. As Ross suggests, however, the future “convergence” that development anticipates is a kind of Euro-American ego-ideal—the rest of the world’s brightest possible future resembled the present of the United States or western Europe. As Gilman puts it, the modernity development looked forward to was “an abstract version of what postwar American liberals wished their country to be.”
Emerging as it did in the decline, and then in the wake, of Europe’s African, Asian, and American empires, mainstream mid-century writing on development trod carefully around the issue of exploitation. Gunnar Myrdal, for example, was careful to distinguish the “dynamic” term “underdeveloped” from its predecessor, “backwards” (1957, 7). Rather than view the underdeveloped as static wards of more “advanced” metropolitan countries, in other words, the preference was to view all peoples as capable of historical dynamism, even if they occupied different stages on a singular timeline. Popularizers of modernization theory like Walter Rostow described development as a historical stage that could be measured by certain material benchmarks, like per-capita car ownership. But it also required immaterial, subjective cultural achievements, as Josefina Saldaña-Portillo, Jorge Larrain, and Molly Geidel have pointed out. In his well-known Stages of Economic Growth, Rostow emphasized how achieving modernity required the acquisition of what he called “attitudes,” such as a “Newtonian” worldview and an acclimation to “a life of change and specialized function” (1965, 26). His emphasis on cultural attributes—prerequisites for starting development that are also consequences of achieving it—is an example of the development concept’s circular, often self-contradictory meanings. “Development” was both a process and its end point—a nation undergoes development in order to achieve development, something Cowen and Shenton call the “old utilitarian tautology of development” (1996, 4), in which a precondition for achieving development would appear to be its presence at the outset.
This tautology eventually circles back to what Nustad (2007, 40) calls the lingering colonial relationship of trusteeship, the original implication of colonial “development.” For post-colonial critics of developmentalism, the very notion of “development” as a process unfolding in time is inseparable from this colonial relation, given the explicit or implicit Euro-American telos of most, if not all, development models. Where modernization theorists “naturalized development’s emergence into a series of discrete stages,” Saldaña-Portillo (2003, 27) writes, the Marxist economists and historians grouped loosely under the heading of “dependency theory” spatialized global inequality, using a model of “core” and “periphery” economies to counter the model of “traditional” and “modern” ones. Two such theorists, Andre Gunder Frank and Walter Rodney, framed their critiques of development with the grammar of the word itself. Like “innovation,” “development” is a progressive noun, which indicates an ongoing process in time. Its temporal and agential imprecision—when will the process ever end? Can it? Who is in charge?—helps to lend development a sense of moral and political neutrality, which it shares with “innovation.” Frank titled his most famous book on the subject The Development of Underdevelopment, emphasizing that underdevelopment was not a mere absence of development but capitalist development’s necessary product. Rodney’s book How Europe Underdeveloped Africa did something similar, by making “underdevelop” into a transitive verb, rather than treating “underdevelopment” as a neutral condition.[3]
As Luc Boltanski and Eve Chiapello argue, this language of neutrality became a hallmark of European accounts of global poverty and underdevelopment after the 1960s. Their survey of economics and development literature tracks the rise of the category of “exclusion” (and its opposite number, “empowerment”) and the gradual disappearance of “exploitation” from economic and humanitarian literature about poverty. No single person, firm, institution, party, or class is responsible for “exclusion,” Boltanski and Chiapello explain. Reframing exploitation as exclusion therefore “permits identification of something negative without proceeding to level accusations. The excluded are no one’s victims” (2007, 347, 354). Exploitation is a circumstance that enriches the exploiter; the poverty that results from exclusion, however, is a misfortune profiting no one. Consider, as an example, the mission statement of the Grameen Foundation, which Yunus founded. It remains one of the leading microlenders in the world, devoted to bringing impoverished people in the global South, especially women, into the financial system through the provision of small, low-collateral loans. “Empowerment” and “innovation” are two of its core values. “We champion innovation that makes a difference in the lives of the poor,” runs one plank of the Foundation’s mission statement (Grameen Foundation India n.d.). “We seek to empower the world’s poor, especially the poorest women.” “Innovation” is often not defined in such statements, but rather treated as self-evidently meaningful. Like “development,” innovation is a perpetually ongoing process, with no clear beginning or end. One undergoes development to achieve development; innovation, in turn, is the pursuit of innovation, and as soon as one innovates, the innovation thus created soon ceases to be an innovation. This wearying semantic circle helps evacuate the process of its power dynamics, of winners and losers. As Evgeny Morozov (2014, 5) has argued about what he calls “solutionism,” the celebration of technological and design fixes approaches social problems like inequality, infrastructural collapse, inadequate housing, etc.—which might be regarded as results of “exploitation”—as intellectual puzzles for which we simply have to discover the solutions. The problems are not political; rather, they are conceptual: we either haven’t had the right ideas, or else we haven’t applied them right.[4] Grameen’s mission, to bring the world’s poorest into financial markets that currently do not include them, relies on a fundamental presumption: that the global financial system is something you should definitely want to be a part of.[5] But as Banerjee et al. (2015, 23) have argued, to the extent that microcredit programs offer benefits, they mostly accrue to already profitable businesses. The broader social benefits touted by the programs—women’s “empowerment,” more regular school attendance, and so on—were either negligible or non-existent. And as a local government official in the Indian state of Andhra Pradesh told the New York Times in 2010, microloan programs in his district had not proven to be less exploitative than their predecessors, only more remote. “The money lender lives in the community,” he said. “At least you can burn down his house” (Polgreen and Bajaj 2010).
Humanitarian Innovation and the Idea of “The Poor”
Yunus’s TED Talk and the Grameen Foundation’s mission statement draw on the twinned ideal of innovation as procedure and salvation, and in so doing they recapitulate development’s modernist faith in the leveling possibilities of technology, albeit with the individualist, market-based zeal that is particular to neoliberal innovation thinking. “Humanitarian innovation” is a growing subfield of international development theory, which, like “social innovation,” encourages market-based solutions to poverty. Most scholars date the concept to the 2009 fair held by ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action), an international humanitarian aid agency that measures and evaluates aid programs. Two of its leading academic proponents, Alexander Betts and Louise Bloom of the Oxford Humanitarian Innovation Project, define it as follows:
“Innovation is the way in which individuals or organizations solve problems and create change by introducing new solutions to existing problems. Contrary to popular belief, these solutions do not have to be technological and they do not have to be transformative; they simply involve the adaptation of a product or process to context. ‘Humanitarian’ innovation may be understood, in turn, as ‘using the resources and opportunities around you in a particular context, to do something different to what has been done before’ to solve humanitarian challenges” (Betts and Bloom 2015, 4).[6]
Here and elsewhere, the HIP hews closely to conventional Schumpeterian definitions of the term, which indeed inform most uses of “innovation” in the private sector and elsewhere: as a means of “solving problems.” Read in this light, “innovation” might seem rather innocuous, even banal: a handy way of naming a human capacity for adaptation, improvisation, and organization. But elsewhere, the authors describe humanitarian innovation as an urgent response to very specific contemporary problems that are political and ecological in nature. “Over the past decade, faced with growing resource constraints, humanitarian agencies have held high hopes for contributions from the private sector, particularly the business community,” they write. Compounding this climate of economic austerity that derives from “growing resource constraints” is an environmental and geopolitical crisis that means “record numbers of people are displaced for longer periods by natural disasters and escalating conflicts.” But despite this combination of violence, ecological degradation, and austerity, there is hope in technology: “new technologies, partners, and concepts allow humanitarian actors to understand and address problems quickly and effectively” (Betts and Bloom 2014, 5-6).
The trope of “exclusion,” and its reliance on a rather anodyne vision of the global financial system as a fair sorter of opportunities and rewards, is crucial to a field that counsels collaboration with the private sector. Indeed, humanitarian innovators adopt a financial vocabulary of “scaling,” “stakeholders,” and “risk” in assessing the dangers and effectiveness (the “cost” and “benefits”) of particular tactics or technologies. In one paper on entrepreneurial activity in refugee camps, de la Chaux and Haugh make an argument in keeping with innovation discourse’s combination of technocratic proceduralism and utopian grandiosity: “Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” they write, emphasizing in their first clause the political and economic austerity that conditions the “entrepreneurial” response (2014, 2). Relying on an exclusion paradigm, the authors point to a “lack of functioning markets” as a cause of poverty in the camps. By “lack of functioning markets,” de la Chaux and Haugh mean lack of capital—but “market,” in this framework, becomes simply an institutional apparatus that one enters and in which one is judged on one’s merits, rather than a field of conflict in which one labors in a globalized class society. At the same time, “innovation” that “empowers” the world’s “poorest” also inherits an enduring faith in technology as a universal instrument of progress. One of the preferred terms for this faith is “design”: a form of techne that, two of its most famous advocates argue, “addresses the needs of the people who will consume a product or service and the infrastructure that enables it” (Brown and Wyatt 2010).[7] The optimism of design proceeds from the conviction that systems—water safety, nutrition, etc.—fail because they are designed improperly, without input from their users. De la Chaux addresses how ostensibly temporary camps grow into permanent settlements, using Jordan’s Za’atari refugee camp near the Syrian border as an example. Her elegant solution to the infrastructural problems these under-resourced and overpopulated communities experience? “Include urban planners in the early phases of the humanitarian emergency to design out future infrastructure problems,” as if the political question of resources is merely secondary to technical questions of design and expertise (de la Chaux and Haugh 2014, 19; de la Chaux 2015).
In these examples, we can see once again how the ideal type of the “innovator” or entrepreneur emerges as the protagonist of the historical and economic drama unfolding in the peripheral spaces of the world economy. The humanitarian innovator is a flexible, versatile, pliant, and autonomous individual, whose potential is realized in the struggle for wealth accumulation, but whose private zeal for accumulation is thought to benefit society as a whole.[8] Humanitarian or social innovation discourse emphasizes the agency and creativity of “the poor,” by discursively centering the authority of the “user” or entrepreneur rather than the agency or the consumer. Individual qualities like purpose, passion, creativity, and serendipity are mobilized in the service of broad social goals. Yet while this sort of individualism is central in the literature of social and humanitarian innovation, it is not itself a radically new “innovation.” It instead recalls a pattern that Molly Geidel has recently traced in the literature and philosophy of the Peace Corps. In Peace Corps memoirs and in the agency’s own literature, she writes, the “romantic desire” for salvation and identification with the excluded “poor” was channeled into the “technocratic language of development” (2015, 64).
Innovation’s emphasis on the intellectual, spiritual, and creative faculties of the single entrepreneur as historically decisive recapitulates in these especially individualistic terms a persistent thread in Cold War development thinking: its emphasis on cultural transformations as prerequisites for economic ones. At the same time, humanitarian innovation’s anti-bureaucratic ethos of autonomy and creativity is often framed as a critique of “developmentalism” as a practice and an industry. It is a response to criticisms of twentieth-century development as a form of neocolonialism, as too growth-dependent, too detached from local needs, too fixated on big projects, too hierarchical. Consider the development agency UNICEF, whose 2014 “Innovation Annual Report” embraces a vocabulary and funding model borrowed from venture capital. “We knew that we needed to help solve concrete problems experienced by real people,” reads the report, “not just building imagined solutions at our New York headquarters and then deploy them” (UNICEF 2014, 2). Rejecting a hierarchical model of modernization, in which an American developmentalist elite “deploys” its models elsewhere, UNICEF proposes “empowerment” from within. And in place of “development,” as a technical process of improvement from a belated historical and economic position of premodernity, there is “innovation,” the creative capacity responsive to the desires and talents of the underdeveloped.
As in the social innovation model promoted by the Stanford Business School and the ideal of “empowerment” advanced by Grameen, the literature of humanitarian innovation sees “the market” as a neutral field. The conflict between the private sector, the military, and other non-humanitarian actors in the process of humanitarian innovation is mitigated by considering each as an equivalent “stakeholder,” with a shared “investment” in the enterprise and its success; abuse of the humanitarian mission by profit-seeking and military “stakeholders” can be prevented via the fabrication of “best practices” and “voluntary codes of conduct” (Betts and Bloom 2015, 24). One report, produced for ALNAP along with the Humanitarian Innovation Fund, draws on Everett Rogers’s canonical theory of innovation diffusion. Rogers taxonomizes and explains the ways innovative products or methods circulate, from the most forward-thinking “early adopters” to the “laggards” (1983, 247-250). The ALNAP report does grapple with the problems of importing profit-seeking models into humanitarian work, however. “In general,” write Obrecht and Warner (2014, 80-81), “it is important to bear in mind that the objective for humanitarian scaling is improvement to humanitarian assistance, not profit.” Here, the problem is explained as one of “diffusion” and institutional biases in non-profit organizations, not a conflict of interest or a failing of the private market. In the humanitarian sector, they write, “early adopters” of innovations developed elsewhere are comparatively rare, since non-profit workers tend to be biased towards techniques and products they develop themselves. However, as Wendy Brown (2015, 129) has recently argued about the concepts of “best practices” and “benchmarking,” the problem is not necessarily that the goals being set or practices being emulated are intrinsically bad. The problem lies in “the separation of practices from products,” or in other words, the notion that organizational practices translate seamlessly across business, political, and knowledge enterprises, and that different products—market dominance, massive profits, reliable electricity in a rural hamlet, basic literacy—can be accomplished via practices imported from the business world.
Again, my objective here is not to evaluate the success of individual initiatives pursued under this rubric, nor to castigate individual humanitarian aid projects as irredeemably “neoliberal” and therefore beyond the pale. To do so basks a bit too easily in the comfort of condemnation that the pejorative “neoliberal” offers the social critic, and it runs the risk, as Ferguson (2009, 169) writes, of nostalgia for the era of “old-style developmental states,” which were mostly capitalist as well, after all.[9] Instead, my point is to emphasize the political work that “innovation” as a concept does: it depoliticizes the resource scarcity that makes it seem necessary in the first place by treating the private market as a neutral arbiter or helpful partner rather than an exploiter, and it does so by disavowing the power of a Western subject through the supposed humility and democratic patina of its rhetoric. For example, the USAID Development Innovation Ventures, which seeds projects that will win support from private lenders later, stipulates that “applicants must explain how they will use DIV funds in a catalytic fashion so that they can raise needed resources from sources other than DIV” (USAID 2017). The hoped-for innovation here, it would seem, is the skill with which the applicants accommodate the scarcity of resources, and the facility with which they commercialize their project. One funded project, an initiative to encourage bicycle helmet use in Cambodia, “has the potential to save the Cambodian government millions of dollars over the next 10 years,” the description proclaims. But obviously, just because something saves the Cambodian government millions doesn’t mean there is a net gain for the health and safety of Cambodians. It could simply allow the Cambodian government to give more money away to private industry or buy $10 million worth of new weapons to police the Laotian border. “Innovation,” here, requires an adjustment to austerity.
Adjustment, often reframed positively as “resilience,” is a key concept in this literature. In another report, Betts, Bloom, and Weaver (2015, 8) single out a few exemplary innovators from the informal economy of the displaced person’s camp. They include tailors in a Syrian camp’s outdoor market; the Somali owner of an internet café in a Kenyan refugee camp; an Ethiopian man who repairs refrigerators with salvaged air conditioners and fans; and a Ugandan who built a video-game arcade in a settlement near the Rwandan border. This man, identified only as Abdi, has amassed a collection of second-hand televisions and game consoles he acquired in Kampala, the Ugandan capital. “Instead of waiting for donors I wanted to make a living,” says Abdi in the report, exemplifying the values of what Betts, Bloom, and Weaver call “bottom-up innovation” by the refugee entrepreneur. Their assessment is a generous one that embraces the ingenuity and knowledge of displaced and impoverished people affected by crisis. Top-down or “sector-wide” development aid, they write, “disregards the capabilities and adaptive resourcefulness that people and communities affected by conflict and disaster often demonstrate” (2015, 2). In this report, refugees are people of “great resilience,” whose “creativity” makes them “change makers.” As Julian Reid and Brad Evans write, we apply the word “resilient” to a population “insofar as it adapts to rather than resists the conditions of its suffering in the world” (2014, 81). The discourse of humanitarian innovation makes the same concession to the inevitability of the structural conditions that make such resilience necessary in the first place. Nowhere is it suggested that refugee capitalists might be other than benevolent, or that inclusion in circuits of national and transnational capital might exacerbate existing inequalities, rather than transcend them. Furthermore, humanitarian innovation advocates never argue that market-based product and service “innovation” is, in a refugee context, beneficial to the whole, given the paucity of employment and services in affected communities; this would at least be an arguable point. The problem is that the question is never even asked. The market is like oxygen.
Conclusion: The TED Talk and the Innovation Romance
In 2003, I visited a recently settled barrio—one could call it a “shantytown”—perched on a hillside high above the east side of Caracas. I remember vividly a wooden, handmade press, ringed with barbed wire scavenged from a nearby business, that its owner, a middle-aged woman newly arrived in the capital, used to crush sugar cane into juice. It was certainly an innovation, by any reasonable definition: a novel, creative solution to a problem of scarcity, a new process for doing something. I remember being deeply impressed by the device, which I found brilliantly ingenious. What I never thought to call it, though, was a “solution” to its owner’s poverty. Nor, I am sure, did she; she lived in a hard-core chavista neighborhood, where dispossessing the country’s “oligarchs” would have been offered as a better innovation—in the old Emma Goldman sense. It is not, therefore, that individual ingenuity, creativity, fearlessness, hard work, and resistance to the impossible demands that transnational capital has placed on people like the video-game entrepreneur in Uganda, or that woman in Caracas, are disreputable things to single out and praise. Quite the contrary: my objection is to the capitulation to their exploitation that is smuggled in with this admiration.
I have argued that “innovation” is, at best, a vague concept asked to accommodate far too much in its combination of heroic and technocratic meanings. Innovation, in its modern meaning, is about revolutionizing “process” and technique: this often leaves outcomes unexamined and unquestioned. The outcome of that innovative sugar cane press in Caracas is still a meager income selling juice in a perilous informal marketplace. The promiscuity of innovation’s use also makes it highly mobile and subject to abuse, as even enthusiastic users of the concept, like Betts and Bloom at the Oxford Humanitarian Innovation Project, acknowledge. As they caution, “use of the term in the humanitarian system has lacked conceptual clarity, leading to misuse, overuse, and the risk that it may become hollow rhetoric” (2014, 5). I have also argued that innovation, especially in the context of neoliberal development, must be understood in moral terms, as it makes a virtue of private accumulation and accommodation to scarcity, and it circulates an ego-ideal of the first-world self to an audience of its admirers. It is also an ideological celebration of what Harvey calls the neoliberal alignment of individual well-being with unregulated markets, and what Brown calls “the economization of the self” (2015, 33). Finally, as a response to the enduring crises of third-world poverty, exacerbated by the economic and ecological dangers of the twenty-first century, the language of innovation beats a pessimistic retreat from the ideal of global equality that, in theory at least, development in its manifold forms always held out as its horizon.
Innovation discourse draws on deep wells—its moral claim is not new, as a reader of The Protestant Ethic and the Spirit of Capitalism will observe. Inspired in part by the example of Benjamin Franklin’s autobiography, Max Weber argued that capitalism in its ascendancy reimagined profit-seeking activities, which might once have been described as avaricious or vulgar, as a virtuous “ethos” (2001, 16-17). Capitalism’s challenge to tradition, Weber argued, demanded some justification; reframing business as a calling or a vocation could help provide one. Capitalism in our time still demands validation not only as a virtuous discipline, but as an enterprise devoted to serving the “common good,” write Boltanski and Chiapello. As they say, “an existence attuned to the requirements of accumulation must be marked out for a large number of actors to deem it worth the effort of being lived” (2007, 10-11). “Innovation” as an ideology marks out this sphere of purposeful living for the contemporary managerial classes. Here, again, the word’s close association with “creativity” is instrumental, since creativity is often thought to be an intrinsic, instinctual human behavior. “Innovating” is therefore not only a business practice that will, as Franklin argued about his own industriousness, improve oneself in the eyes of both man and God. It is also a secular expression of the most fundamental individual and social features of the self—the impulse to understand and to improve the world. This is particularly evident in the discourse of social innovation, which the Social Innovation Lab at Stanford defines as a practice that aims to leverage the private market to solve modern society’s most intractable “problems”: housing, pollution, hunger, education, and so on. When something like world hunger is described as a “problem” in this way, though, international food systems, agribusiness, international trade, land ownership, and other sources of malnutrition disappear. Structures of oppression and inequality simply become discrete “problems” for which no one has yet invented the fix. They are individual nails in search of a hammer, and the social innovator is quite confident that a hammer exists for hunger.
Microfinance is another one of these hammers. As one economist critical of the microcredit system notes at the beginning of his own book on the subject, “most accounts of microfinance—the large-scale, businesslike provision of financial services to poor people—begin with a story” (Roodman 2012, 1). These usually take the form of a narrative of an encounter with a sympathetic third-world subject. For Roodman, the microfinancial stories of hardship and transcendence have a seductive power over their first-world audiences, of which he is legitimately suspicious. As we saw above, Schumpeter’s procedural “entrepreneurial function” is itself also a story of a creative entrepreneur navigating the tempests of modern capitalism. In the postmodern romance of social innovation in the “underdeveloped” world, the Western subject of the drama is both ever-present and constantly disavowed. The TED Talk, with which we began, is in its crude way the most expressive genre of this contemporary version of the entrepreneurial romance.
Rhetorically transformative but formally archaic—what could be less innovative than a lecture?—the genre of the social innovation TED Talk models innovation ideology’s combination of grandiosity and proceduralism, even as its strict generic conventions—so often and easily parodied—repeatedly undermine the speakers’ regular claims to transcendent breakthroughs. For example, in his TEDx Montreal address, Ethan Kay (2012) began in the conventional way: with a dire assessment of a monumental, yet easily overlooked, social problem in a third-world country. “If we were to think about the biggest problems affecting our world,” Kay begins, “any socially conscious person would have to include poverty, disease, and climate change. And yet there is one thing that causes all three of these simultaneously, that we pay almost no attention to, even though a very good solution exists.” Having established the scope of the problem, next comes the sentimental identification. The knowledge of this social problem is only possible because of the hospitality and insight of some poor person abroad, something familiar from Geidel’s reading of Peace Corps memoirs and Roodman’s microcredit stories: in Kay’s case, it is in the unelectrified “hut” of a rural Indian woman where, choking on cooking smoke, he realizes the need for a clean-burning indoor cookstove. Then comes the self-deprecating joke, in which the speaker acknowledges his early naivete and establishes his humble capacity for self-reflection. (“I’m just a guy from Cleveland, Ohio, who has trouble cooking a grilled-cheese sandwich,” says Kay, winning a few reluctant laughs.) And then comes the technocratic solution: when the insight thus acquired is subjected to the speaker’s reason and empathy, the deceptively simple and yet world-making “solution” emerges. Despite the prominent formal place of the underdeveloped character in this genre, the teller of the innovation story inevitably ends up the hero. The throat-clearing self-seriousness, the ritualistic gestures of humility, the promise to the audience of transformative change without inconvenient political consequences, and the faith in technology as a social leveler all perform the TED Talk’s ego-ideal of social “innovation.”
One of the most successful social innovation TED Talks is Sugata Mitra’s tale of the “self-organized learning environment” (SOLE). Mitra won a $1 million prize from TED in 2013 for a talk based on his “hole-in-the-wall” experiment in New Delhi, which tests poor children’s ability to learn autonomously, guided only by internet-enabled laptops and cloud-based adult mentors abroad (Ted.com 2013). Mitra’s idea was an excellent example of innovation discourse’s combination of the procedural and the prophetic. In the case of the latter, he begins: “There was a time when Stone Age men and women used to sit and look up at the sky and say, ‘What are those twinkling lights?’ They built the first curriculum, but we’ve lost sight of those wondrous questions” (Mitra 2013). What gets us to this lofty goal, however, is a comparatively simple process. True to genre, Mitra describes the SOLE as the fruit of a serendipitous discovery. After he and his colleagues cut a hole in the wall that separated his technology firm’s offices from an adjoining New Delhi slum, they placed an Internet-enabled computer in the new common area. When he returned weeks later, Mitra found local children using it expertly. Leaving unsupervised children in a room with a laptop, it turns out, activates innate capacities for self-directed learning stifled by conventional schooling. Mitra promises a cost-effective solution to the problem of primary and secondary education in the developing world—do virtually nothing. “This is done by children without the help of any teacher,” Mitra confidently concludes, sharing a PowerPoint slide of the students’ work. “The teacher only raises the question, and then stands back and admires the answer.”
When we consider innovation’s religious origins in false prophecy, its current orthodoxy in the discourse of technological evangelism—and, more broadly, in analog versions of social innovation—is often a nearly literal example of Rayvon Fouché’s argument that the formerly colonized, “once attended to by bibles and missionaries, now receive the proselytizing efforts of computer scientists wielding integrated circuits in the digital age” (2012, 62). One of the additional ironies of contemporary innovation ideology, though, is that these populations exploited by global capitalism are increasingly charged with redeeming it—the comfortable denizens of the West need only “stand back and admire” the process driven by the entrepreneurial labor of the newly digital underdeveloped subject. To the pain of unemployment, the selfishness of material pursuits, the exploitation of most of humanity by a fraction, the specter of environmental cataclysm that stalks our future and haunts our imagination, and the scandal of illiteracy, market-driven innovation projects like Mitra’s “hole in the wall” offer next to nothing, while claiming to offer almost everything.
_____
John Patrick Leary is associate professor of English at Wayne State University in Detroit and a visiting scholar in the Program in Literary Theory at the Universidade de Lisboa in Portugal in 2019. He is the author of A Cultural History of Underdevelopment: Latin America in the U.S. Imagination (Virginia 2016) and Keywords: The New Language of Capitalism, forthcoming in 2019 from Haymarket Books. He blogs about the language and culture of contemporary capitalism at theageofausterity.wordpress.com.
[1] “The entrepreneur and his function are not difficult to conceptualize,” Schumpeter writes: “the defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation).”
[2] The term “underdeveloped” was only a bit older: it first appeared in “The Economic Advancement of Under-developed Areas,” a 1942 pamphlet on colonial economic planning by a British economist, Wilfrid Benson.
[3] I explore this semantic and intellectual history in more detail in my book, A Cultural History of Underdevelopment (Leary, 4-10).
[4] Morozov describes solutionism as an ideology that sanctions the following delusion: “Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”
[5] “Although the number of unbanked people globally dropped by half a billion from 2011 to 2014,” reads a Foundation web site’s entry under the tab “financial services”, “two billion people are still locked out of formal financial services.” One solution to this problem focuses on Filipino convenience stores, called “sari-sari” stores: “In a project funded by the JPMorgan Chase Foundation, Grameen Foundation is empowering sari-sari store operators to serve as digital financial service agents to their customers.” Clearly, the project must result not only in connecting customers to financial services, but in opening up new markets to JP Morgan Chase. See “Alternative Channels.”
[6] This quoted definition of “humanitarian innovation” is attributed to an interview with an unnamed international aid worker.
[7] Erickson (2015, 113-14) writes that “design thinking” in public education “offers the illusion that structural and institutional problems can be solved through a series of cognitive actions…” She calls it “magic, the only alchemy that matters.”
[8] A management-studies article on the growth of so-called “innovation prizes” for global development claimed sunnily that at a recent conference devoted to such incentives, “there was a sense that society is on the brink of something new, something big, and something that has the power to change the world for the better” (Everett, Wagner, and Barnett 2012, 108).
[9] “It is here that we have to look more carefully at the ‘arts of government’ that have so radically reconfigured the world in the last few decades,” writes Ferguson, “and I think we have to come up with something more interesting to say about them than just that we’re against them.” Ferguson points out that neoliberalism in Africa—the violent disruption of national markets by imperial capital—looks much different than it does in western Europe, where it is usually treated as a form of political rationality or an “art of government” modeled on markets. It is the political rationality, as it is formed through an encounter with the “third world” object of imperial neoliberal capital, that is my concern here.
_____
Works Cited
Bacon, Francis. 1844. The Works of Francis Bacon, Lord Chancellor of England. Vol. 1. London: Carey and Hart.
Banerjee, Abhijit, et al. 2015. “The Miracle of Microfinance? Evidence from a Randomized Evaluation.” American Economic Journal: Applied Economics 7:1.
Betts, Alexander, Louise Bloom, and Nina Weaver. 2015. “Refugee Innovation: Humanitarian Innovation That Starts with Communities.” Humanitarian Innovation Project, University of Oxford.
Betts, Alexander and Louise Bloom. 2014. “Humanitarian Innovation: The State of the Art.” OCHA Policy and Studies Series.
Boltanski, Luc and Eve Chiapello. 2007. The New Spirit of Capitalism. Translated by Gregory Elliott. New York: Verso.
De la Chaux, Marlen and Helen Haugh. 2014. “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps.” Judge Business School, University of Cambridge.
Erickson, Megan. 2015. Class War: The Privatization of Childhood. New York: Verso.
Everett, Bryony, Erika Wagner, and Christopher Barnett. 2012. “Using Innovation Prizes to Achieve the Millennium Development Goals.” Innovations: Technology, Governance, Globalization 7:1.
Ferguson, James. 2009. “The Uses of Neoliberalism.” Antipode 41:S1.
Fouché, Rayvon. 2012. “From Black Inventors to One Laptop Per Child: Exporting a Racial Politics of Technology.” In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge. 61-84.
Frank, Andre Gunder. 1991. The Development of Underdevelopment. Stockholm, Sweden: Bethany Books.
Geidel, Molly. 2015. Peace Corps Fantasies: How Development Shaped the Global Sixties. Minneapolis: University of Minnesota Press.
Gilman, Nils. 2003. Mandarins of the Future: Modernization Theory in Cold War America. Baltimore: Johns Hopkins University Press.
Godin, Benoit. 2015. Innovation Contested: The Idea of Innovation Over the Centuries. New York: Routledge.
Morozov, Evgeny. 2014. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
Moss, Frank. 2011. The Sorcerers and Their Apprentices: How the Digital Magicians of the MIT Media Lab Are Creating the Innovative Technologies that Will Transform Our Lives. New York: Crown Business.
National Economic Council and Office of Science and Technology Policy. 2015. “A Strategy for American Innovation.” Washington, DC: The White House.
North, Michael. 2013. Novelty: A History of the New. Chicago: University of Chicago Press.
Nustad, Knut G. 2007. “Development: The Devil We Know?” In Exploring Post-Development: Theory and Practice, Problems and Perspectives, edited by Aram Ziai. London: Routledge. 35-46.
Obrecht, Alice and Alexandra T. Warner. 2014. “More than Just Luck: Innovation in Humanitarian Action.” London: ALNAP/ODI.
O’Connor, Kevin and Paul B. Brown. 2003. The Map of Innovation: Creating Something Out of Nothing. New York: Crown.
Peters, Tom. 1999. The Circle of Innovation: You Can’t Shrink Your Way to Greatness. New York: Vintage.
Polgreen, Lydia and Vikas Bajaj. 2010. “India Microcredit Faces Collapse From Defaults.” The New York Times (Nov 17).
Reid, Julian and Brad Evans. 2014. Resilient Life: The Art of Living Dangerously. New York: John Wiley and Sons.
Rodney, Walter. 1981. How Europe Underdeveloped Africa. Washington, DC: Howard University Press.
Rogers, Everett M. 1983. Diffusion of Innovations. Third edition. New York: The Free Press.
Roodman, David. 2012. Due Diligence: An Impertinent Inquiry into Microfinance. Washington, D.C.: Center for Global Development.
Ross, Kristin. 1996. Fast Cars, Clean Bodies: Decolonization and the Reordering of French Culture. Cambridge, MA: The MIT Press.
Rostow, Walter. 1965. The Stages of Economic Growth: A Non-Communist Manifesto. New York: Cambridge University Press.
Saldaña-Portillo, Josefina. 2003. The Revolutionary Imagination in the Americas and the Age of Development. Durham, NC: Duke University Press.
Schumpeter, Joseph. 1934. The Theory of Economic Development. Cambridge, MA: Harvard University Press.
Schumpeter, Joseph. 1941. “The Creative Response in Economic History.” The Journal of Economic History 7:2.
Schumpeter, Joseph. 2003. Capitalism, Socialism, and Democracy. London: Routledge.
Seiter, Ellen. 2005. The Internet Playground: Children’s Access, Entertainment, and Miseducation. New York: Peter Lang.
Shakespeare, William. 2005. Henry IV. New York: Bantam Classics.
In official, commercial, and activist discourses, networked computing is frequently heralded for establishing a field of inclusive, participatory political activity. It is taken to be the latest iteration of, or a standard-bearer for, “technology”: an autonomous force penetrating the social world, an independent variable whose magnitude may not directly be modified and whose effects are or ought to be welcomed. The internet, its component techniques and infrastructures, and related modalities of computing are often supposed to be accelerating and multiplying various aspects of the ideological lynchpin of the neoliberal order: individual sovereignty.[1] The internet is hailed as the dawn of a new communication age, one in which democracy is to be reinvigorated and expanded through the publicity and interconnectivity made possible by new forms of networked relations among informed consumers.
Composed of consumer choice, intersubjective rationality, and the activity of the autonomous subject, such sovereignty also forms the basis of many strands of contemporary ethical thought—which has increasingly come to displace rival conceptions of political thought in sectors of the Anglophone academy. In this essay, I focus on two turns and their parallels—the turn to the digital in commerce, politics, and society; and the turn to the ethical in professional and elite thought about how such domains should be ordered. I approach the digital turn through the case of the free and open source software movements. These movements are concerned with sustaining a publicly-available information commons through certain technical and juridical approaches to software development and deployment. The community of free, libre, and open source software (FLOSS) developers and maintainers is one of the more consequential spaces in which actors frequently endorse the claim that the digital turn precipitates an unleashing of democratic potential in the form of improved deliberation, equalized access to information, networks, and institutions, and a leveling of hierarchies of authority. I approach the ethical turn through an examination of the political theory of democracy, particularly as it has developed in the work of theorists of deliberative democracy like Jürgen Habermas and John Rawls.
By FLOSS I refer, more or less interchangeably, to software that is licensed such that it may be freely used, modified, and distributed, and whose source code is similarly available so that it may be inspected or changed by anyone (Free Software Foundation 2018). (It stands in contradistinction to “closed source” or proprietary software that is typically produced and sold by large commercial firms.) The agglomeration of “free,” “libre,” and “open source” reflects the multiple ideological geneses of non-proprietary software. Briefly, “free” or “libre” software is so named because, following Stallman’s (2015) original injunction in 1985, the conditions of its distribution forbid rendering the code (or derivative code) proprietary for the sake of maximizing the freedom of downstream coders and users to do as they see fit with it. The signifier “free” primarily connotes the absence of restrictions on use, modification, and distribution, rather than considerations of cost or exchange value. Of crucial importance to the free software movement was the adoption of “copyleft” licensure of software, in which copies of software are freely distributed with the restriction that subsequent users and distributors not impose additional restrictions upon subsequent distribution. As Stallman has noted, copyleft is built on a deliberate contradiction of copyright: “Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free” (Stallman 2002, 22). Avowed members of the free software movement also conceive of free software’s importance not just in technical terms but in moral terms as well. For them, the free software ecosystem is a moral-pedagogical space in which values are reproduced and developers’ skills are fostered through unfettered access to free software (Kelty 2008).
“Open source” software derives its name from a push—years after Stallman’s cri de coeur—that stressed non-proprietary software’s potential in the business world. Advocates of the open source framing downplayed free software’s origins in the libertarian-individualist ethos of the early free software movement. They discarded its rhetorics of individual freedom in favor of the invocation of “innovation,” “openness,” and neoliberal subjectivity. Toward the end of the twentieth century, open source activists “partially codified this philosophical frame by establishing a clear priority for pragmatic technical achievement over ideology (which was more central to the culture of the Free Software Foundation)” (Weber 2005, 165). In the current moment, antagonisms between proponents of the respective terminologies are comparatively muted. In many FLOSS developer spaces, the most commonly-avowed view is that the practical upshot of the differences in emphasis between “free” and “open source” is unimportant: the typical user or producer doesn’t care, and the immediate social consequences of the distinction are close to nil. (It is noteworthy that this framing is fully compatible with the self-consciously technicist, pragmatic framing of the open source movement, less so with the ideological commitments of the free software movement. Whether or not it is the case at the micro level that free software and open source software retain meaningfully different political valences is beyond the scope of this essay, although it is possible that voices welcoming an elision of “free” and “open source” do protest too much.)
FLOSS is situated at the intersection of several trends and tendencies. It is a body of technical practice (hacking or coding); it is also a political-ethical formation. FLOSS is an integral component of capitalist software development—but it is also a hobbyist’s toy and a creator’s instrument (Kelty 2008), a would-be entrepreneur’s tool (Weber 2005), and an increasingly essential piece of academic kit (see, e.g., Coleman 2012). A generation of scholarship in anthropology, cultural studies, history, sociology, and other related fields has established that FLOSS is an appropriate object of study not only because its participants are typically invested in the internet-as-emancipatory-technology narrative, but also because free and open source software development has been profoundly consequential for both the cultural and technical character of the present-day information commons.
In the remainder of the essay, I gesture at a critique of this view of the internet’s alleged emancipatory potential by examining its underlying assumptions and the theory of democracy to which it adheres. This theory trades on the idea that democracy is an ethical practice, one that achieves its fullest expression in the absence of coercion and the promotion of deliberative norms. This approach to thinking about democracy has numerous analogues in current debates in political theory and political philosophy. In prevailing models of liberal politics, institutions and ethical constraints are privileged over concepts like organization, contestation, and—above all—the pursuit and exercise of power. Indeed, within contemporary liberal political thought it is sometimes difficult to discern the activity of thinking about politics as such. I do not argue here for the merits of contestatory democracy, nor do I conceal an unease with the depoliticizing tendencies of deliberative democracy, or with the tendency to substitute the ethical for the political. Instead I draw out the theoretical commonalities between the emergence of deliberative democracy and the turn toward the digital in relations of production and reproduction. I suggest that critiques of the shortcomings of liberal thought regarding political activity and political persuasion are also applicable to the social and political claims and propositions that undergird the strategies and rhetorics of FLOSS. The hierarchies of commitment that one finds in contemporary liberalism may be detected in FLOSS thought as well. Liberalism typically prioritizes intersubjectivity over mass political action and contestation. Similarly, FLOSS rhetoric focuses on ethical persuasion rather than the pursuit of influence and social power such that proprietarian computing may be resisted or challenged. Liberalism also prioritizes property relations over other social relations. The FLOSS movement similarly retains a stark commitment to the priority of liberal property relations and to the idea of personal property in digital commodities (Pedersen 2010).
In the context of FLOSS and the information commons, a depoliticized theory of democracy fails to attend to the dynamics of power, and to crucial considerations of political economy in communications and computing. An insistence on conceiving of democracy as an ethical aspiration or as a moral ideal—rather than as a practice of mass politics with a given historical and institutional specificity—serves to obscure crucial features of the internet as a cultural and social phenomenon. It also grants an illusory warrant for ideological claims to the effect that computing and internet-mediated communication constitute meaningful and consequential forms of civic participation and political engagement. As the ethical displaces the political, so the technological displaces the ethical. In the process, the workings of power are obscured, the ideological trappings of technologically-mediated domination are mystified, and the social forms that are peculiar to internet subcultures are naturalized as typifying the form of social organization that all democrats ought to seek after.
In identifying the theoretical affinities between the liberalism of the digital turn and the ethical turn in liberal political theory, I hope to contribute to an enriched, interdisciplinary understanding of the available spaces for investigation and research with respect to emerging trends in digital life. The social relations that are both constituted by and constitutive of the worlds of software, networked communication, and pervasive computing are rightly becoming the objects of sustained study within disparate fields in humanistic disciplines. This essay aims at provoking new questions in such study by examining the theoretical linkages between the digital turn and the ethical turn.
The Digital Turn
The internet—considered in the broadest possible sense, as something comprised of networks and terminals through which various forms of sociality are mediated electronically—attracts, of course, no small amount of academic, elite, and popular attention. A familiar story tends to arise out of these attentions. The digital turn ushers in the promise of digital democracy: an expansion of opportunities for participation in politics (Klein 1999), and a revolutionizing of communications that connects individuals in networks (Castells 2010) of informed and engaged consumers and producers of non-material content (Shirky 2008). Dissent would prove impossible to stifle, as information—endowed with its own virtual, composite personality, and empowered by sophisticated technologies—would both want and be able to be free. “The Net interprets censorship as damage and routes around it” (as cited in Reagle 1999) is a famous—and possibly apocryphal—variant of this piece of folk wisdom. Pervasive networked computing ensures that citizens will be self-mobilizing in their participation in politics and in their scrutiny of corruption and rights abuses. Capital, meanwhile, can anticipate a new suite of needs to be satisfied through informational commodities. The only losers are governments that, despite enthusiastic rhetoric about an “information superhighway,” are unable to keep pace with technological growth, or with popular adoption of decentralized communications media. Their capacities to restrict or control discourse will be crippled; their control over their own populations will diminish in proportion to the growth of electronically-mediated communication.[2]
Much of the excitement over the internet is freighted with neoliberal (Brown 2005) ideology, either in implicit or explicit terms. On this view, liberalism’s focus on the unfettered movement of commodities and the unrestricted consumption activities of individuals will find its final and definitive instantiation in a world of digital objects (with a marginal cost approaching zero) and the satisfaction of consumer needs through novel and innovative patterns of distribution. The cultural commons may be reclaimed through transformations of digital labor—social, collaborative, and remix-friendly (Benkler 2006). Problems of production can be solved through increasingly sophisticated chains of logistics (Bonacich and Wilson 2008), finally fulfilling the unrealized cybernetic dreams of planners and futurists in the twentieth century.[3] Political superintendence of the market—and many other social fields—will be rendered redundant by rapid, unmediated feedback mechanisms linking producers and consumers. This contradictory utopia will achieve a non-coercive panopticon of full information, made possible through the endless concatenation of individual decisions to consume, evaluate, and generate information (Shirky 2008).
This prediction has not been vindicated. Contemporary observers of the internet age do not typically describe it in terms of democratic vistas and cultural efflorescence. They are likelier to examine it in terms of the extension of technologies of control and surveillance, and in terms of the subsumption of sociality under the regime of neoliberal capital accumulation. Indeed, the digital turn follows a trajectory similar to that of the neoliberal turn in governance. The neoliberal turn has enhanced rather than undermined the capacities of the state. Those capacities are directed not at the provision of public goods and social services but rather at coercive security and labor discipline. The digital turn’s course has decidedly not been one of individual empowerment and an expansion of the scope of participatory forms of democratic politics. Instead, networked computing is now a profit center for a small number of titanic capitals. Certainly, the revolution in communications technology has influenced social relations. But the political consequences of that influence do not constitute a profound transformation and extension of democracy (Hindman 2008). Nor are the consequences of the revolution in communications uniformly emancipatory (Morozov 2011). More generally, the subsumption of greater swathes of sociality within the logics of computing presents the risk of the enclosure of public information, and of the extension of the capabilities of the powerful to surveil and coerce others while evading public supervision (Drahos 2002, Golumbia 2009, Pasquale 2015).
Extensive critiques of “the Californian ideology” (Barbrook and Cameron 2002), renascent “cyberlibertarianism” (Dahlberg 2010) and its affinities with longstanding currents in right-wing thought (Golumbia 2013), and related ideological formations are all ready to hand. The digital turn is of course not characterized by a singular politics. However, the hegemonic political tendency associated with it may be fairly described as a complex of libertarian ideology, neoliberal political economy, and antistatist rhetoric. The material substrate for this complex is the burgeoning arena of capitals pursuing profits through the exploitation of “digital labor” (Fuchs 2014). Such labor occurs in software development, but also in hardware manufacturing; the buying, selling, and licensing of intellectual property; and the extractive industries providing the necessary mineral ores, rare earth metals, and other primary inputs for the production of computers (on this point see especially Dyer-Witheford 2015). The growth of this sector has been accomplished through the exploitation of racialized and marginalized populations (see, for example, Amrute 2016), the expropriation of the commons through the transformation of public assets into private property, and the decoupling in the public mind of any link between easily accessed electronic media and computing power, on the one hand, and massive power consumption and environmental devastation, on the other.
Even where hopes for the emancipatory potential of a cyberlibertarian future have been dashed, enthusiasm for the left-right hybrid politics that first bruited it remains widespread. In areas in which emancipatory hopes remain unchastened by the experience of capital’s colonization of the information commons, that enthusiasm is undiminished. FLOSS movements are important examples of such areas. In FLOSS communities and spaces, left-liberal commitments to social justice causes are frequently melded with a neoliberal faith in decentralized, autonomous activity in the development, deployment, and maintenance of computing processes. When FLOSS activists self-reflexively articulate their political commitments, they adopt rhetorics of democracy and cooperative self-determination that are broadly left-liberal. However, the politics of FLOSS, like hacker politics in general, also betray a right-libertarian fixation on the removal of obstacles to individual wills. The hacker’s political horizon is the unfettering of the socially untethered, electronically empowered self (Borsook 2000). Similarly, the liberal commitments that undergird contemporary theories of “deliberative democracy” are easily adapted to serve libertarian visions of the good society.
The Ethical and the Political
The liberalism of such political theory as is encountered in FLOSS discourse may be fruitfully compared to the turn toward deliberative models of social organization. This turn is characterized by a dual trend in postwar political thought, centered in, though not limited to, the North Atlantic academy. It consists of the elision of theoretical distinctions between individual ethical practice and democratic citizenship, while increasing the theoretical gap between agonistic practices—contestation, conflict, direct action—and policy-making within the institutional context of liberal constitutionality. The political is often equated with conflict—and thereby, potentially, violence or coercion. The ethical, by contrast, comes closer to resembling democracy as such. Democracy is, or ought to be, “depoliticized” (Pettit 2004); deliberative democracy, aimed at the realization of ethical consensus, is normatively prior to aggregative democracy or the mere counting of votes. On this view, the historical task of democracy is not to grant greater social purchase to political tendencies or formations; nor does it consist in forging tighter links between decision-making institutions and the popular will. Rather, democracy is a legitimation project, under which the decisions of representative elites are justified in terms of the publicity of the reasons or justifications supplied on their behalf. The uncertain movement between these two poles—conceiving of democracy as a normative ideal, and conceiving of it as a description of adequately legitimated institutions—is hardly unique to contemporary democratic theory. The turn toward the deliberative and the ethical is distinguished by the narrowness of its conception of the democratic—indeed by its insistence that the democratic, properly understood, is characterized by the dampening of political conflict and a tendential movement toward consensus.
Why ought we consider the trajectory of postwar liberal thought in conjunction with the digital turn? First, there are, of course, similarities and continuities between the fortunes of liberal ideology in both the world of software work and the world of academic labor. The former is marked to a much greater extent by a widespread distrust of mechanisms of governance and is indelibly shaped by outpourings of an ascendant strain of libertarian triumphalism. Where ideological development in software work has charted a libertarian course, in academic Anglophone political thought it has more closely followed a path of neoliberal restructuring. To the extent that we maintain an interest in the consequences of the digitization of sociality, it is germane and appropriate to consider liberalism in software work and liberalism in professional political theory in tandem. However, there is a rather more important reason to chart the movement of liberal political thought in this context: many of the debates, problematics, and proffered solutions in the politico-ideological discourse in the world of software work are, as it were, always already present in liberal democratic theory. As such, an examination of the ethical turn—liberal democratic theory’s disavowal of contestation, and of the agon that interpellates structures of politics (Mouffe 2005, 80–105)—can aid subsequent examinations of the ontological, methodological, and normative presuppositions that inform the self-understanding of formations and tendencies within FLOSS movements. Both FLOSS discourses and professional democratic theory tend to discharge conclusions in favor of a depoliticized form of democracy.
Deliberative democracy’s roots lie in liberal legitimation projects begun in response to challenges from below and outside existing power structures. Despite effacing its own political content, deliberative democracy must nevertheless be understood as a political project. Notable gestures toward the concept may be found in John Rawls’s theory-building project, beginning with A Theory of Justice (1971); and in Jürgen Habermas’s attempts to render the intellectual legacy of the Frankfurt School compatible with postwar liberalism, culminating in Between Facts and Norms (1996). These philosophical moves were being made at the same time as the fragmentation of the postwar political and economic consensus in developed capitalist democracies. Critics have detected a trend toward retrenchment in both currents: the evacuation of political economy—let alone Marxian thought—from critical theory; the accommodation made by Rawls and his epigones with public choice theory and neoliberal economic frames. The turn from contestatory politics in Anglophone political thought was simultaneous with the rise of a sense that the institutional continuity and stability of democracy were in greater need of defense than were demands for political criticism and social transformation. By the end of the postwar boom years, an accommodation with “neoliberal governmentality” (Brown 2015) was under way throughout North Atlantic intellectual life. The horizons of imagined political possibility were contracting at the very conjuncture when labor movements and left political formations foundered in the face of the consolidation of the capitalist restructuring under way since the third quarter of the twentieth century.
Rawls’s account of justified institutions does not place a great emphasis on mass politics; nor does Habermas’s delineation of the boundaries of the ideal circumstances for communication—except insofar as the memory of fascism that Habermas inherited from the Frankfurt School weighs heavily on his forays into democratic theory. Mass politics is an inherently suspect category in Habermas’s thought. It is telling—and by no means surprising—that the two heavyweight theorists of North Atlantic postwar social democracy are primarily concerned with political institutions and with “the ideal speech situation” (Habermas 1996, 322–328) rather than with mass politics. They are both concerned with making justificatory moves rather than with exploring the possibilities and limits to mass politics and collective action. Rawls’s theory of justice describes a technocratic scheme for a minimally redistributive social democratic polity, while Habermas’s oeuvre has increasingly come to serve as the most sophisticated philosophical brief on behalf of the project of European cosmopolitan liberalism. Within the confines of this essay it is impossible to engage in a sustained consideration of the full sweep of Rawls’s political theory, including his conception of an egalitarian and redistributive polity and his constructivist account of political justification; similarly, the survey of Habermas presented here is necessarily compressed and abstracted. I restrict the scope of my critical gestures to the contributions made by Rawls and Habermas to the articulation of a deliberative conception of democracy. In this respect, they were strikingly similar:
Both Rawls and Habermas assert, albeit in different ways, that the aim of democracy is to establish a rational agreement in the public sphere. Their theories differ with respect to the procedures of deliberation that are needed to reach it, but their objective is the same: to reach a consensus, without exclusion, on the ‘common good.’ Although they claim to be pluralist, it is clear that theirs is a pluralism whose legitimacy is only recognized in the private sphere and that it has no constitutive place in the public one. They are adamant that democratic politics requires the elimination of passions from the public sphere. (Mouffe 2013, 55)
In neither Rawls’s nor Habermas’s writings is the theory of deliberative democracy simply the expression of a preference for the procedural over the substantive. It is better understood as a preference for unity and consensus, coupled with a minoritarian suspicion of the institutions and norms of mass electoral democracy. It is true that both their deliberative democratic theories evince considerable concern for the procedures and conditions under which issues are identified, alternatives are articulated, and decisions are made. However, this concern is motivated by a preoccupation with a particular substantive interest: specifically, the reproduction of liberal democratic forms. Such forms are valued not for their own sake—indeed, that would verge on incoherence—but because they are held to secure certain moral ends: respect for individuals, reciprocity of regard or recognition between persons, the banishment of coercion from public life, and so on. The ends of politics are framed in terms of morality—a system of universal duties or ends. The task of political theory is to envision institutions which can secure ends or goods that may be seen as intrinsically desirable. Notions that the political might be an autonomous domain of human activity, or that political theory’s ambit extends beyond making sense of existing configurations of institutions, are discarded. In their place is an approach to political thought rooted in concerns about technologies of governance. Such an approach concerns itself with political disagreement primarily insofar as it is a foreseeable problem that must be managed and contained.
Depoliticized, deliberative democracy may be characterized as one or more of several forms of commitment to an apolitical conception of social organization. It is methodologically individualist: it takes the (adult, sociologically normative and therefore likely white and cis-male) individual person as the appropriate object of analysis and as the denominator to which social structures ultimately reduce. It is often intersubjective in its model of communication: that is, ideas are transmitted by and between individuals, typically or ideally two individuals standing in a relation of uncoerced respect with one another. It is usually deliberative in the kind of decision-making it privileges: authoritative decisions arise not out of majoritarian voting mechanisms or mass expressions of collective will, but rather out of discursive encounters that encourage the formation and exchange of claims whose content conforms to specific substantive criteria. It is often predicated on the notion that the most valuable or self-constitutive of individuals’ beliefs and understandings are pre-political: individual rational agents are “self-authenticating sources of valid claims” (Rawls 2001, 23). Their claims are treated as exogenous to the social and political contexts in which they are found. Depoliticized democracy is frequently racialized and erected on a series of assumptions and cultural logics of hierarchy and domination (Mills 1997). Finally, depoliticized democracy insists on a particular hermeneutic horizon: the publicity of reasons. For any claim to be considered credible, and for public exercises to be considered legitimate, they must be comprehensible in terms of the worldviews, held premises, or anterior normative commitments of all persons who might somehow be affected by them.
Theories of deliberative democracy are not merely suspicious of political disagreement—they typically treat it as pathological. Social cleavages over ideology (which may always be reduced to the concatenation of individual deliberations) are evidence either of bad faith argumentation or a failure to apprehend the true nature of the common good. To the extent that deliberative democracy is not nakedly elitist, it ascribes to those democratic polities it considers well-formed a capacity for a peculiar kind of authority. Such collectivities are capable, by virtue of their well-formed deliberative structures, of discharging decisions that are binding precisely because they are correct with reference to standards that are anterior to any dialectic that might take place within the social body itself. Consequently, much depends on the ideological content of those standards.
The concept of public reason has acquired special potency in the hands of Rawls’s legatees in North American analytic political philosophy. Similar in aim to Habermas’s ideal speech situation, the modern idea of public reason is meant to model an ideal state of deliberative democracy. Rawls locates its origins in Rousseau (Rawls 2007, 231). However, it acquires a specifically Kantian conception in his elaboration (Rawls 2001, 91–94), and an extensive literature in analytic political philosophy is devoted to the elaboration of the concept in a Rawlsian mode (for a good recent discussion see Quong 2013). Public reason requires that contested policies’ justifications be comprehensible to those who controvert those policies. More generally, the polity in which the ideal of public reason obtains is one in which interlocutors hold themselves to be obliged to share, to the extent possible, the premises from which political reasoning proceeds. Arguments that are deemed to originate from outside the boundaries of public reason cannot serve a legitimating function. Public reason usually finds expression in the writings of liberal theorists as an explanation for why controverted policies or decisions may nevertheless be viewed as substantively appropriate and democratically legitimated.
Proponents of public reason often cast the ideal as a commonplace of reasonable discussion that merely binds interlocutors to deliberate in good faith. However, public reason may also be described as a cudgel with which to police the boundaries of debate. It effectively cedes discursive power to those who controvert public policy in order to control the trajectory of the discourse—if they are possessed of enough social power. Explicitly liberal in its philosophical genealogy, public reason is expressive of liberal democratic theory’s wariness with respect to both radical and reactionary politics. Many liberal theorists are primarily concerned to show how public reason constrains reactionaries from advancing arguments that rest on religious or theological grounds. An insistence on public reasonableness (perhaps framed through an appeal to norms of civility) may also allow the powerful to cavil at challenges to prevailing economic thought as well as to prevailing understandings of the relationship between the public and the religious.
Habermas’s project on the communicative grounds of liberal democracy (1998) reflects a similar commitment to containing disagreement and establishing the parameters of when and how citizens may contest political institutions and the rules they produce and enforce. His “discourse principle” (1996, 107) is not unlike Rawls’s conception of public reason in that it is intended to serve as a justificatory ground for deliberations tending toward consensus. According to the discourse principle, a given rule or law is justified if and only if those who are to be affected by it could accept it as the product of a reasonable discourse. Much of Habermas’s work—particularly Between Facts and Norms (1996)—is devoted to establishing the parameters of reasonable discourses. Such cartographies are laid out not with respect to controversies arising out of actually existing politics (such as pan-European integration or the problems of contemporary German right-wing politics). They are instead sited within the coordinates of Habermas’s specification of the linguistic and pragmatic contours of the social world in established constitutional democracies. The practical application of the discourse principle is often recursive, in that the particular implications and the scope of the discourse principle require further elaboration or extension within any given domain of practical activity in which the principle is invoked. Despite its rarefied abstraction, the discourse principle is meant in the final instance to be embedded in real activities and sites of discursive activity. (Habermas’s work in ethics parallels his discourse-theoretic approach to politics. His dialogical principle of universalization holds that a moral norm is valid insofar as its observance—and the effects of that observance—would be accepted singly and jointly by all those affected.)
Both Rawls’s and Habermas’s conceptions of the communicative activity underlying collective decision-making are strongly motivated by intersubjective ethical concerns. If anything, Habermas’s discourse ethics, and the parallel moves that he makes in his interventions in political thought, are more exacting than Rawls’s conception of public reason, both in terms of the discursive environments that they presuppose and in terms of the demands that they place upon individual interlocutors. Both thinkers’ views also conceive of political conflict as a field in which ethical questions predominate. Indeed, under these views political antagonism might be seen as pathological, or at least taken to be the locus of a sort of problem situation: If politics is taken to be a search for the common welfare (grounded in commonly-avowed terms), or is held to consist in the provision of public goods whose worth can, in principle, be agreed upon, then it would make sense to think that political antagonism is an ill to be avoided. Politics would then be exceptional, whereas the suspension of political antagonism for the sake of decisive, authoritative decision-making would be the norm. This is the core constitutive contradiction of the theory of deliberative democracy: the priority given to discussion and rationality tends to foreclose the possibility of contestation and disagreement.
If, however, politics is a struggle for power in the pursuit of collective interests, it becomes harder to insist that the task of politics is to smooth over differences, rather than to articulate them and act upon them. Both Rawls and Habermas have been the subjects of extensive critique by proponents of several different perspectives in political theory. Communitarian critics have typically charged Rawls with relying on a too-atomized conception of individual subjects, whose preferences and beliefs are unformed by social, cultural or institutional contexts (Gutmann 1985); similar criticisms have been mounted against Habermas (see, for example, C. Taylor 1989). Both thinkers’ accounts of the foundations of political order fail to acknowledge the politically constitutive aspects of gender and sexuality (Okin 1989, Meehan 1995). From the perspective of a more radical conception of democracy, even Rawls’s later writings in which he claims to offer a constructivist (rather than metaphysical) account of political morality (Rawls 1993) do not necessarily pass muster, particularly given that his theory is fundamentally a brief for liberalism and not for the democratization of society (for elaboration of this claim see Wolin 1996).
Deliberative democracy, considered as a prescriptive model of politics, represents a striking departure both from political thought on the right—typically preoccupied with maintaining cultural logics and preserving existing social hierarchies—and from political thought on the left, which often emphasizes contingency, conflict, and the priority of collective action. Both of these latter approaches to politics take social phenomena as subjects of concern in and of themselves, and not merely as intermediate formations which reduce to individual subjectivity. The substitution of the ethical for the political marks an intellectual project that is adequate to the imperatives of a capitalist political economy. The contradictory merger of the ethical anxieties underpinning deliberative democratic theory and liberal democracy’s notional commitment to legitimation through popular sovereignty tends toward quietism and immobilism.
FLOSS and Democracy
The free and open source software movements are cases of distinct importance in the emergence of digital democracy. Their traditions, and many of the actors who participate in them, antedate the digital turn considerably: the free software movement began in earnest in the mid-1980s, while its social and technical roots may be traced further back and are tangled with countercultural trends in computing in the 1970s. The movements display durable commitments to ethical democracy in their rhetoric, their organizational strategies, and the philosophical presuppositions that are revealed in their aims and activities (Coleman 2012).
FLOSS is sited at the intersection of many of liberal democratic theory’s desiderata: property, persuasion, rights, and ethics. The movement is a flawed, incompletely successful, but suggestive and instructive attempt at reconfiguring capitalist property relations—importantly, and fatally, from inside an existing set of capitalist property relations—for the sake of realizing liberal ethical commitments with respect to expression, communication, and above all personal autonomy. Self-conscious hackers in the world of FLOSS conceive of their shared goals as the maximization of individual freedom with respect to the use of computers. Coleman describes how many hackers conceive of this activity in explicitly ethical terms. For them, hacking is a vital expression of individual freedom—simultaneously an aesthetic posture as well as a furtherance of specific ethical projects (such as the dissemination of information, or the empowerment of the alienated subject).
The origins of the free software movement are found in the countercultural currents of computing in the 1970s, when several lines of inquiry and speculation converged: cybernetics, decentralization, critiques of bureaucratic organization, and burgeoning individualist libertarianism. Early hacker values—such as unfettered sharing and collaboration, a suspicion of distant authority given expression through decentralization and redundancy, and the maximization of the latitude of individual coders and users to alter and deploy software as they see fit—might be seen as the outflowing of several political traditions, notably participatory democracy and mutualist forms of anarchism. Certainly, the computing counterculture born in the 1970s was self-consciously opposed to what it saw as the bureaucratized, sclerotic, and conformist culture of major computing firms and research laboratories (Barbrook and Cameron 2002). Richard Stallman’s 1985 declaration of the need for, and the principles underlying, the free development of software is often treated as the locus classicus of the movement (Stallman, The GNU Manifesto 2015). Stallman succeeded in instigating a narrow kind of movement, one whose social specificity it is possible to trace. Its social basis consisted of communities of software developers, analysts, administrators, and hobbyists—in a word, hackers—that shared Stallman’s concerns over the subsumption of software development under the value-expanding imperatives of capital. As they saw it, the values of hacking were threatened by a proprietarian software development model predicated on the enclosure of the intellectual commons.
Democracy, as it is championed by FLOSS advocates, is not necessarily an ideal of well-ordered constitutional forms and institutions whose procedures are grounded in norms of reciprocity and intersubjective rationality. It is characterized by a tension between an enthusiasm for volatile forms of participatory democracy and a tendency toward deference to the competence or charisma (the two are frequently conflated) of leaders. Nevertheless, the parallels between the two political projects—deliberative democracy and hacker liberation under the banner of FLOSS—are striking. Both projects share an emphasis on the persuasion of individuals, such that intersubjective rationality is the test of the permissibility of power arrangements or use restrictions. As such, both projects—insofar as they are to be considered to be interventions in politics—are necessarily self-limiting.
Exponents of digital democracy rely on a conception of democracy that is strikingly similar to the theory of ethical democracy considered above. The constitutive documents and inscriptive commitments of various FLOSS communities bear witness to this. FLOSS communities should attract our interest because they are frequently animated by ethical and political concerns which appear to be liberal—even left-liberal—rather than libertarian. Barbrook and Cameron’s “Californian ideology” is frequently manifested in libertarian rhetorics that tend to have a right-wing grounding. The rise of Bitcoin is also a particularly resonant recent example (Golumbia 2016). The adulation that accompanies the accumulation of wealth in Silicon Valley furnishes a more abstract example of the ideological celebration of acquisitive amour propre in computing’s social relations. The ideological substrate of commercial computing is palpably right-wing, at least in its orientation to political economy. As such it is all the more noteworthy that the ideological commitments of many FLOSS projects appear to be animated by ethico-political concerns that are more typical of left-liberalism, such as: consensus-seeking modes of collective decision-making; recognition of the struggles and claims of members of marginalized or oppressed groups; and the affirmation of differing identities.
Free software rhetoric relies on concepts like liberty and freedom (Free Software Foundation 2016). It is in this rhetoric that free software’s imbrication within capitalist property relations is most apparent:
Freedom means having control over your own life. If you use a program to carry out activities in your life, your freedom depends on your having control over the program. You deserve to have control over the programs you use, and all the more so when you use them for something important in your life. (Stallman 2015)
Stallman’s equation of freedom with control—self-control—is telling: Copyleft does not subvert copyright; it depends upon it. Hacking is dependent upon the corporate structure of industrial software development. It is embedded in the social matrix of closed-source software production, even though hackers tend to believe that “their expertise will keep them on the upside of the technology curve that protects the best and brightest from proletarianization” (Ross 2009, 168). A dual contradiction is at work here. First, copyleft inverts copyright in order to produce social conditions in which free software production may occur. Second, copyleft nevertheless remains dependent on closed-source software development for its own social reproduction. Without the state power that is necessary for contracts to be enforced, or without the reproduction of technical knowledge that is underwritten by capital’s continued interest in software development, FLOSS loses its social base. Artisanal hacking or digital homesteading could not enter into the void were capitalist computing to suddenly disappear. The decentralized production of software is largely epiphenomenal upon the centralized and highly cooperative models of development and deployment that typify commercial software development. The openness of development stands in uneasy contrast with the hierarchical organization of the management and direction of software firms (Russell 2014).
Capital has accommodated free and open source software with little difficulty, as can be seen in the expansion of the open source software movement. As noted above, many advocates of both the free software and open source software movements frequently aver that their commitments overlap to the point that any differences are largely ones of emphasis. Nevertheless, open source software differs—in an ideal, if not political, sense—from free software in its distinct orientation to the value of freedom: it is something which is to be valued as the absence of the fetters on coding, design, and debugging that characterize proprietary software development. As such open source software trades on an interpretation of freedom that is rather distinct from the ethical individualism of free software. Indeed, it is more recognizably politically adjacent to right-wing libertarianism. This may be seen, for example, in the writings of Eric S. Raymond, whose influential essay “The Cathedral and the Bazaar” is a paean not to the emancipatory potential of open source software but to its adaptability and suitability for large-scale, rapid-turnover software development—and its amenability to the prerogatives of capital (Raymond 2000).
One of the key ethical arguments made by free and open source software advocates rests on an understanding of property that is historically specific. The conception of property deployed within FLOSS is the absolute and total right of owners to dispose of their possessions—a form of property rights that is peculiar to the juridical apparatus of capitalism. There are, of course, superficial resemblances between software license agreements—which curtail the rights of those who buy hardware with pre-installed commercial software, for example—and the seigneurial prerogatives associated with feudalism. However, the specific set of property relations underpinning capitalist software development is also the same set of property relations that are traded upon in FLOSS theory. FLOSS criticism of proprietary software rarely extends to a criticism of private property as such. Ethical arguments for the expansion of personal computing freedoms, made with respect to the prevailing set of property relations, frequently focus on consumption. The focus may be positive: the freedom of the individual finds expression in the autonomy of the rational consumer of commodities. Or the focus may be negative: individual users must eschew a consumerist approach to computing or they will be left at the mercy of corporate owners of proprietary software.
Arguments erected on premises about individual consumption choices are not easily extended to the sphere of collective political action. They do not discharge calls for pressuring political institutions or pursuing public power. The Free Software Foundation, the main organizational node of the free software movement, addresses itself to individual users (and individual capitalist firms) and places its faith in the ersatz property relations made possible by copyleft’s parasitism on copyright. The FSF’s ostensible non-alignment is really complementary with, rather than antagonistic to, the alignments of major open source organizations. Organizations associated with the open source software movement are eager to find institutional partners in the business world. It is certainly the case that in the world of commercial computing, the open source approach has been embraced as an effective means for socializing the costs of software production (and the reproduction of software development capacities) while privatizing the monetary rewards that can be realized on the basis of commodified software. Meanwhile, the writings of Stallman and the promotional literature of the Free Software Foundation eschew the kind of broad-based political strategy that their analysis would seem to call for, one in which FLOSS movements would join up with other social movements. An immobilist tendency toward a single-issue approach to politics is characteristic of FLOSS at large.
One aspect of deliberative democracy—an aspect that is, as we have seen, treated as banal and unproblematic by many theorists of liberalism—that is often given greater emphasis by active proponents of digital democracy is the primacy of liberal property relations. Property relations take on special urgency in the discourse and praxis of free and open source software movements. Particularly in the propaganda and apologia of the open source movement, the personal computer is the ultimate form of personal property. More than that—it is an extension of the self. Computers are intimately enmeshed in human lives, to a degree even greater than was the case thirty years ago. To many hackers, the possibility that the code executed on their machines is beyond their inspection is a violation of their individual autonomy. Tellingly, analogies for this putative loss of freedom take as their postulates the “normal,” extant ways in which owners relate to the commodities they have purchased. (For example, running proprietary code on a computer may be analogized to driving a car whose hood cannot be opened.)
Consider the Debian Social Contract, adopted in the wake of a series of controversies and debates about gender imbalance (O’Neil 2009, 129–146), which encodes a variety of liberal principles as the constitutive political materials of the Debian project. That the project’s constitutive document is self-reflexively liberal is signaled in its very title: it presupposes liberal concerns with the maximization of personal freedom and the minimization of coercion, all under the rubric of cooperation for a shared goal. The Debian Social Contract was the product of internal struggles within the Debian project, which aims to produce a technically sophisticated and yet ethically grounded version of the GNU/Linux operating system. It represents the ascendancy of a tendency within the Debian project that sought to affirm the project’s emancipatory aims. This is not to suggest that, prior to the adoption of the Social Contract, the project was characterized by an uncontested focus on technical expertise, at the direct expense of an emancipatory vision of FLOSS computing; nevertheless, the experience decisively shifted Debian’s trajectory such that it was no longer parallel with that of related projects.
Another example of FLOSS’s fetishism for non-coercive, individual-centered ethics may be found in the emphasis placed on maximizing individual user freedom. The FSF, for example, considers it a violation of user autonomy to make the use of free, open source software conditional by restricting its use—even only notionally—to legal or morally-sanctioned use cases. As is often the case when individualist libertarianism comes into contact with practical politics, an obstinate insistence on abstract principles discharges absurd commitments. The major stakeholders and organizational nodes in the free software movement—the FSF, the GNU development community, and so on—refuse even to censure the use of free software in situations characterized by the restriction or violation of personal freedoms: military computing, governmental surveillance, and so on.
It must also be noted that the hacker ethos is at least partially coterminous with cyberlibertarianism. Found in both is the tendency to see the digital sphere as both the vindication of neoliberal economic precepts as well as the ideal terrain in which to pursue right-wing social projects. From the user’s perspective, cyberlibertarianism is presented as a license to use and appropriate the work of others who have made their works available for such purposes. It may perhaps be said that cyberlibertarianism is the ethos of the alienated monad pursuing jouissance through the acquisition of technical mastery and control over a personal object, the computer.
Persuasion and Contestation
We are now in a position to examine the contradictions in the theory of politics that informs FLOSS activity. These contradictions converge at two distinct—though certainly related—sites. The first site centers on power and interest aggregation; the second, on property and the claims of users over their machines and data. An elaboration and examination of these contradictions will suggest that, far from overcoming or transcending the contradictions of liberalism as they inhere either in contemporary political practice or in liberal political thought, FLOSS hackers and activists have reproduced them in their practices as well as in their texts.
The first site of contradiction centers on politics. FLOSS advocates adhere to an understanding of politics that emphasizes moral suasion and that valorizes the autonomy of the individual to pursue chosen projects and satisfy their own preferences. This despite the fact that the primary antagonists in the FLOSS political imaginary—corporate owners of IP portfolios, developers and retailers of proprietary software, and policy-makers and bureaucrats—possess considerable political, legal, and social power. FLOSS discourses counterpose to this power, not counterpower but evasion, escape, and exit. Copyleft itself may be characterized as evasive, but more central here is the insistence that FLOSS is an ethical rather than a political project, in which individual developers and users must not be corralled into particular formations that might use their collective strength to demand concessions or transform digitally mediated social relations. This disavowal of politics directly inhibits the articulation of counter-positions and the pursuit of counterpower.
So long as FLOSS as a political orientation remains grounded in a strategic posture of libertarian individualism and interpersonal moral suasion, it will be unable to effectively underwrite demands or place significant pressures on institutions and decision-making bodies. FLOSS political rhetoric trades heavily on tropes of individual sovereignty, egalitarian epistemologies, and participatory modes of decision-making. Such rhetorics align comfortably with the currently prevailing consensus regarding the aims and methods of democratic politics, but when relied on naïvely or uncritically, they place severe limits on the capacity for the FLOSS movement to expand its political horizons, or indeed to assert itself in such a way as to become a force to be reckoned with.
The second site of contradiction is centered on property relations. In the self-reflexive and carefully articulated discourse of FLOSS advocates, persons are treated as ethical agents, but such agents are primarily concerned with questions of the disposition of their property—most importantly, their personal computing devices. Free software advocates, in particular, emphasize the importance of users’ freedoms, but their attentiveness to such freedoms appears to end at the interface between owner and machine. More generally, property relations are foregrounded in FLOSS discourse even as such discourse draws upon and deploys copyleft in order to weaponize intellectual property law against proprietarian use cases.
For so long as FLOSS as a social practice remains centered on copyleft, it will reproduce and reinforce the property relations which sustain a scarcity economy of intellectual creations. Copyleft is commonly understood as an ingenious solution to what is seen as an inherent tendency in the world of software towards restrictions on access, limitations on communication and exchange of information, and the diminution of the informational commons. However, these tendencies are more appropriately conceived of as notably enduring features of the political economy of capitalism itself. Copyleft cannot dismantle a juridical framework heavily weighted in favor of ownership in intellectual property from the inside—no more so than a worker-controlled-and-operated enterprise threatens the circuits of commodity production and exchange that comprise capitalism as a set of social relations. Moreover, major FLOSS advocates—including the FSF and the Open Source Initiative—proudly note the reliance of capitalist firms on open source software in their FAQs, press releases, and media materials. Such a posture—welcoming the embrace of FLOSS by the software industry, with its attendant practices of labor discipline and domination, customer and citizen surveillance, and privatization of data—stands in contradiction with putative FLOSS values like collaborative production, code transparency, and user freedom.
The persistence—even, in some respects, the flourishing—of FLOSS in the current moment represents a considerable achievement. Capitalism’s tendency toward crisis continues to impel social relations toward the subsumption of more and more of the social under the rubric of commodity production and exchange. And yet it is still the case that access to computing processes, logics, and resources remains substantially unrestricted by legal or commercial barriers. Much of this must be credited to the efforts of FLOSS activists. The first cohort of FLOSS activists recognized that resisting the commodification of the information commons was a social struggle—not simply a technical challenge—and sought to combat it. That they did so according to the logic of single-issue interest group activism, rather than in solidarity with a broader struggle against commodification, should perhaps not be surprising; in the final quarter of the twentieth century, broad struggles for power and recognition by and on behalf of workers and the poor were at their lowest ebb in a century, and a reconfiguration of elite power in the state and capitalism was well under way. Cross-class, multiracial, and gender-inclusive social movements were losing traction in the face of retrenchment by a newly emboldened ruling class; and the conceptual space occupied by such work was contested. Articulating their interests and claims as participants in liberal interest group politics was by no means the poorest available strategic choice for FLOSS proponents.
The contradictions of such an approach have nevertheless developed apace, such that the current limitations and impasses faced by FLOSS movements appear more or less intractable. Free and open source software is integral to the operations of some of the largest firms in economic history. Facebook (2018), Apple (2018), and Google (Alphabet, Inc. 2018), for example, all proudly declare their support of and involvement in open source development.[4] Millions of coders, hackers, and users can and do participate in widely (if unevenly) distributed networks of software development, debugging, and deployment. It is now a practical possibility for the home user to run and maintain a computer without proprietary software installed on it. Nevertheless, proprietary software development remains a staggeringly profitable undertaking, FLOSS hacking remains socially and technically dependent on closed computing, and the home computing market is utterly dominated by the production and sale of machines that ship with and run software that is opaque—by design and by law—to the user’s inspection and modification. These limitations are compounded by FLOSS movements’ contradictions with respect to property relations and political strategy.
Implications and Further Questions
The paradoxes and contradictions that attend both the practice and theory of digital democracy in the FLOSS movements bear strong family resemblances to the paradoxes and contradictions that inhere in much contemporary liberal political theory. Liberal democratic theory frequently seeks to meld a commitment to rational legitimation with an affirmation of the ideal of popular sovereignty; but an insistence on rational authority tends to undermine the insurgent potential of democratic mass action. Similarly, the public avowals of respect for human rights and the value of user freedom that characterize FLOSS rhetoric are in tension with a simultaneous insistence on moral suasion centered on individual subjectivity. What’s more, they are flatly contradicted by the stated commitments of prominent leaders and stakeholders in FLOSS communities to capitalist labor relations and to neutrality with respect to the social or moral consequences of the use of FLOSS. Liberal political theory is potentially self-negating to the extent that it discards the political in favor of the ethical. Similarly, FLOSS movements short-circuit much of FLOSS’s potential social value through a studied refusal to consider the merits of collective action or the necessity of social critique.
The disjunctures between the rhetorics and stated goals of FLOSS movements and their actual practices and existing social configurations are deserving of greater attention from a variety of perspectives. I have approached those disjunctures through the lens of political theory, but these phenomena are also deserving of attention within other disciplines. The contradiction between FLOSS’s discursive fealty to the emancipatory potential of software and the dependence of FLOSS upon the property relations of capitalism merits further elaboration and exploration. The digital turn is too easily conflated with the democratization of a social world that is increasingly intermediated by networked computing. The prospects for such an opening up of digital public life remain dim.
_____
Rob Hunter is an independent scholar who holds a PhD in Politics from Princeton University.
[*] I am grateful to the b2o: An Online Journal editorial collective and to two anonymous reviewers for their feedback, suggestions, and criticism. Any and all errors in this article are mine alone. Correspondence should be directed to: jrh@rhunter.org.
_____
Notes
[1] The notion of the digitally-empowered “sovereign individual” is adumbrated at length in an eponymous book by Davidson and Rees-Mogg (1999) that sets forth a right-wing techno-utopian vision of network-mediated politics—a reactionary pendant to liberal optimism about the digital turn. I am grateful to David Golumbia for this reference.
[2] For simultaneous presentations and critiques of these arguments see, for example, Dahlberg and Siapera (2007), Margolis and Moreno-Riaño (2013), Morozov (2013), Taylor (2014), and Tufekci (2017).
[3] See Bernes (2013) for a thorough presentation of the role of logistics in (re)producing social relations in the present moment.
[4] “Google believes that open source is good for everyone. By being open and freely available, it enables and encourages collaboration and the development of technology, solving real world problems” (Alphabet, Inc. 2017).
Amrute, Sareeta. 2016. Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham, NC: Duke University Press.
Apple Inc. 2018. “Open Source.” (Accessed July 31, 2018.)
Barbrook, Richard, and Andy Cameron. (1995) 2002. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates, and Pirate Utopias. Cambridge, MA: The MIT Press. 363–387.
Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
Bernes, Jasper. 2013. “Logistics, Counterlogistics and the Communist Prospect.” Endnotes 3. 170–201.
Bonacich, Edna, and Jake Wilson. 2008. Getting the Goods: Ports, Labor, and the Logistics Revolution. Ithaca, NY: Cornell University Press.
Borsook, Paulina. 2000. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. New York: PublicAffairs.
Brown, Wendy. 2005. “Neoliberalism and the End of Liberal Democracy.” In Edgework: Critical Essays on Knowledge and Politics. Princeton, NJ: Princeton University Press. 37–59.
Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
Castells, Manuel. 2010. The Rise of the Network Society. Malden, MA: Wiley-Blackwell.
Coleman, E. Gabriella. 2012. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ: Princeton University Press.
Dahlberg, Lincoln. 2010. “Cyber-Libertarianism 2.0: A Discourse Theory/Critical Political Economy Examination.” Cultural Politics 6:3. 331–356.
Dahlberg, Lincoln, and Eugenia Siapera. 2007. “Tracing Radical Democracy and the Internet.” In Lincoln Dahlberg and Eugenia Siapera, eds., Radical Democracy and the Internet: Interrogating Theory and Practice. Basingstoke: Palgrave. 1–16.
Davidson, James Dale, and William Rees-Mogg. 1999. The Sovereign Individual: Mastering the Transition to the Information Age. New York: Touchstone.
Drahos, Peter. 2002. Information Feudalism: Who Owns the Knowledge Economy? New York: The New Press.
Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis, MN: University of Minnesota Press.
Gutmann, Amy. 1985. “Communitarian Critics of Liberalism.” Philosophy and Public Affairs 14. 308–322.
Habermas, Jürgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, MA: MIT Press.
Habermas, Jürgen. 1998. The Inclusion of the Other. Edited by Ciaran P. Cronin and Pablo De Greiff. Cambridge, MA: MIT Press.
Hindman, Matthew. 2008. The Myth of Digital Democracy. Princeton, NJ: Princeton University Press.
Kelty, Christopher M. 2008. Two Bits: The Cultural Significance of Free Software. Durham, NC: Duke University Press.
Klein, Hans. 1999. “Tocqueville in Cyberspace: Using the Internet for Citizens Associations.” Technology and Society 15. 213–220.
Laclau, Ernesto, and Chantal Mouffe. 2014. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. London: Verso.
Margolis, Michael, and Gerson Moreno-Riaño. 2013. The Prospect of Internet Democracy. Farnham: Ashgate.
Meehan, Johanna, ed. 1995. Feminists Read Habermas. New York: Routledge.
Mills, Charles W. 1997. The Racial Contract. Ithaca, NY: Cornell University Press.
Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs.
Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
Mouffe, Chantal. 2005. The Democratic Paradox. London: Verso.
Mouffe, Chantal. 2013. Agonistics: Thinking the World Politically. London: Verso.
Okin, Susan Moller. 1989. “Justice as Fairness, For Whom?” In Justice, Gender and the Family. New York: Basic Books. 89–109.
O’Neil, Mathieu. 2009. Cyberchiefs: Autonomy and Authority in Online Tribes. London: Pluto Press.
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Pedersen, J. Martin. 2010. “Introduction: Property, Commoning and the Politics of Free Software.” The Commoner 14 (Winter). 8–48.
Pettit, Philip. 2004. “Depoliticizing Democracy.” Ratio Juris 17:1. 52–65.
Quong, Jonathan. 2013. “On the Idea of Public Reason.” In The Blackwell Companion to Rawls, edited by John Mandle and David A. Reidy. Oxford: Wiley-Blackwell. 265–280.
Rawls, John. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
Rawls, John. 2001. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
Rawls, John. 2007. Lectures on the History of Political Philosophy. Cambridge, MA: The Belknap Press of Harvard University Press.
Indigenous people are here—here in digital space just as ineluctably as they are in all the other “unexpected places” where historian Philip Deloria (2004) suggests we go looking for them. Indigenous people are on Facebook, Twitter, and YouTube; they are gaming and writing code, podcasting and creating apps; they are building tribal websites that disseminate immediately useful information to community members while asserting their sovereignty. And they are increasingly present in electronic archives. We are seeing the rise of Indigenous digital collections and exhibits at most of the major heritage institutions (e.g., the Smithsonian) as well as at a range of museums, universities and government offices. Such collections carry the promise of giving tribal communities more ready access to materials that, in some cases, have been lost to them for decades or even centuries. They can enable some practical, tribal-nation rebuilding efforts, such as language revitalization projects. From English to Algonquian, an exhibit curated by the American Antiquarian Society, is just one example of a digitally-mediated collaboration between tribal activists and an archiving institution that holds valuable historic Native-language materials.
“Digital repatriation” is a term now used to describe many Indigenous electronic archives. These projects create electronic surrogates of heritage materials, often housed in non-Native museums and archives, making them more available to their tribal “source communities” as well as to the larger public. But digital repatriation has its limits. It is not, as some have pointed out, a substitute for the return of the original items. Moreover, it does not necessarily challenge the original archival politics. Most current Indigenous digital collections, indeed, are based on materials held in universities, museums and antiquarian societies—the types of institutions that historically had their own agendas of salvage anthropology, and that may or may not have come by their materials ethically in the first place. There are some practical reasons that settler institutions might be first to digitize: they tend to have rather large quantities of material, along with the staff, equipment and server space to undertake significant electronic projects. The best of these projects are critically self-conscious about their responsibilities to tribal communities. And yet the overall effect of digitizing settler collections first is to perpetuate colonial archival biases—biases, for instance, toward baskets and buckskins rather than political petitions; biases toward sepia photographs of elders rather than elders’ letters to state and federal agencies; biases toward more “exotic” images, rather than newsletters showing Native activists successfully challenging settler institutions to acknowledge Indigenous peoples’ continuous, and political presence.
Those petitions, letters and newsletters do exist, but they tend to reside in the legions of small archives gathered, protected and curated by tribal people themselves, often with gallingly little material support or recognition from outside their communities. While it is true that many Indigenous cultural heritage items have been taken from their source communities for display in remote collecting institutions, it is also true that Indigenous people have continued to maintain their own archives of books, papers and art objects in tribal offices, tribal museums, attics and garages. Such items might be in precarious conditions of preservation, subject to mold, mildew or other damage. They may be incompletely inventoried, or catalogued only in an elder’s memory. And they are hardly ever digitized. A recent survey by the Association of Tribal Archives Libraries and Museums (2013) found that, even though digitization is now the industry standard for libraries and archives, very few tribal collections in the United States are digitizing anything at all. Moreover, the survey found, this often isn’t for lack of desire, but for lack of resources—lack of staff and time, lack of access to adequate equipment and training, lack of broadband.[1]
Tribally stewarded collections often hold radically different kinds of materials that tell radically different stories from those historically promoted by institutions that thought they were “preserving” cultural remnants. Of particular interest to me as a literary scholar is the Indigenous writing that turns up in tribal and personal archives: tribal newsletters and periodicals; powwow and pageant programs; mimeographed books used to teach language and traditional narratives; recorded oral histories; letters, memoirs and more. Unlike the ethnographers’ photographs, colonial administrators’ records and (sometimes) decontextualized material objects that dominate larger museums, these writings tell stories of Indigenous survival and persistence. In what follows, I give a brief review of some of the best-known Indigenous electronic archives, followed by a consideration of how digitizing Indigenous writing, specifically, could change the way we see such archives. In their own recirculations of their writings online, Native people have shown relatively little interest in the concerns that currently dominate the field of Digital Humanities, including “preservation,” “open access,” “scalability,” and (perhaps the most unfortunate term in this context) “discoverability.” They seem much keener to continue what those literary traditions have in fact always done: assert and enact their communities’ continuous presence and political viability.
Digital Repatriation and Other Consultative Practices
Indigenous digital archives are very often based in universities and headed by professional scholars, often with substantial community engagement. The Yale Indian Papers Project, which seeks to improve access to primary documents demonstrating the continuous presence of Indigenous people in New England, elicits editorial assistance from a number of Indigenous scholars and tribal historians. The award-winning Plateau Peoples’ Web Portal at Washington State University takes this collaborative methodology one step further, inviting consultants from neighboring tribal nations to come into the university archives and select and curate materials for the web. Other digital Indigenous exhibits come from prestigious museums and collecting institutions, like the American Philosophical Society’s “Native American Images Project.” Indeed, with so many libraries, museums and archives now creating digital collections (whether in the form of e-books, scanned documents, or full electronic exhibits), materials related to Indigenous people can be found in an ever-growing variety of formats and places. Hence, there is also a rising popularity of portals—regional or state-based sites that can act as gateways to a wide variety of digital collections. Some are specific to Indigenous topics and locations, like the Carlisle Indian School Digital Resource Center, which compiles web-based resources for studying U.S. boarding school history. Other digital portals sweep up Indigenous objects along with other cultural materials, like the Maine Memory Network or the Digital Public Library of America.
It is not surprising that the bent of most of these collections is decidedly ethnographic, given that Indigenous people the world over have been the subjects of one prolonged imperial looting. Cultural heritage professionals are now legally (or at least ethically) required to repatriate human remains and sacred objects, but in recent years, many have also begun to speak of “digital repatriation.” Just as digital collections of all kinds are providing new access to materials held in far-flung locations, Indigenous digital collections are arguably a boon to elders and other Native people who live far, for instance, from the Smithsonian Museum and can now readily view their cultural property. The digitization of heritage materials can, in fact, help promote cultural revitalization and culturally responsive teaching (Roy and Christal 2002; Srinivasan et al. 2010). Many such projects aim expressly “to reinstate the role of the cultural object as a generator, rather than an artifact, of cultural information and interpretation” (Brown and Nicholas 2012, 313).
Nonetheless, Indigenous people may be forgiven if they take a dim view of their cultural heritage items being posted willy nilly on the internet. Some have questioned whether digital repatriation is a subterfuge for forestalling or refusing the return of the original items. Jim Enote (Zuni), Executive Director of the A:shiwi A:wan Museum and Heritage Center, has gone so far as to say that the words “digital” and “repatriation” simply don’t belong in the same sentence, pointing out that nothing in fact is being repatriated, since even the digital item is, in most cases, also created by a non-Native institution (Boast and Enote 2013, 110). Others worry about the common assumption that unfettered access to information is always and everywhere an unqualified good. Anthropologist Kimberly Christen has asked pointedly, “Does Information Really Want to be Free?” Her answer: “For many Indigenous communities in settler societies, the public domain and an information commons are just another colonial mash-up where their cultural materials and knowledge are ‘open’ for the profit and benefit of others, but remain separated from the sociocultural systems in which they were and continue to be used, circulated, and made meaningful” (Christen 2012, 2879-80).
A truly decolonized archive, then, calls for a critical re-examination of the archive itself. As Ellen Cushman (Cherokee) puts it, “Archives of Indigenous artifacts came into existence in part to elevate the Western tradition through a process of othering ‘primitive’ and Native traditions . . . . Tradition. Collection. Artifacts. Preservation. These tenets of colonial thought structure archives whether in material or digital forms” (Cushman 2013, 119). The most critical digital collections, therefore, are built not only through consultation with Indigenous knowledge-keepers, but also with considerable self-consciousness about the archival endeavor itself. The Yale editors, for instance, explain that “we cannot speak for all the disciplines that have a stake in our work, nor do we represent the perspective of Native people themselves . . . . [Therefore tribal] consultants’ annotations might include Native origin stories, oral sources, and traditional beliefs while also including Euro-American original sources of the same historical event or phenomena, thus offering two kinds of narratives of the past” (Grant-Costa, Glaza, and Sletcher 2012). Other sites may build this archival awareness into the interface itself. Performing Archive: Curtis + the “vanishing race,” for instance, seeks explicitly to “reject enlightenment ideals of the cumulative archive—i.e. that more materials lead to better, more accurate knowledge—in order to emphasize the digital archive as a site of critique and interpretation, wherein access is understood not in terms of access to truth, but to the possibility of past, present, and future performance” (Kim and Wernimont 2014).
Additional innovations worth mentioning here include the content management system Mukurtu, initially developed by Christen and her colleagues to facilitate culturally responsive archiving for an Aboriginal Australian collection, and quickly embraced by projects worldwide. Built on the recognition that “Indigenous communities across the globe share similar sets of archival, cultural heritage, and content management needs” (Christen 2005, 317), Mukurtu lets communities build their own digital collections and exhibits, while giving them fine-grained control over who can access those materials—e.g., through tribal membership, clan system, family network, or some other benchmark. Christen and her colleague Jane Anderson have also created a system of traditional knowledge (TK) licenses and labels—icons that can be placed on a website to help educate site visitors about the culturally appropriate use of heritage materials. The licenses (e.g., “TK Commercial,” “TK Non-Commercial”) are meant to be legal instruments for owners of heritage material; a tribal museum, for instance, could use them to signal how it intends for electronic material to be used or not used. The TK labels, meanwhile, are extra-legal tools meant to educate users about culturally appropriate approaches to material that may, legalistically, be in the “public domain,” but from a cultural standpoint have certain restrictions: e.g., “TK Secret/Sacred,” “TK Women Restricted,” “TK Community Use Only.”
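The kind of community-defined access logic that Mukurtu and the TK labels describe can be made concrete with a short sketch. The following Python fragment is a hypothetical illustration only: it is not Mukurtu’s actual data model or API, and the protocol names, item fields, and label strings are placeholders for whatever a given community would define for itself. In this sketch the defaults run toward restriction rather than openness: every protocol attached to an item must be satisfied, an unrecognized protocol denies access, and the TK label travels with the item as education rather than enforcement.

```python
from dataclasses import dataclass, field

# Hypothetical sketch in the spirit of Mukurtu's cultural protocols and the TK labels.
# Not Mukurtu's real data model or API; names and rules are illustrative placeholders.

@dataclass
class HeritageItem:
    title: str
    protocols: set[str] = field(default_factory=set)    # e.g. {"community-only", "women-restricted"}
    tk_labels: list[str] = field(default_factory=list)  # e.g. ["TK Secret/Sacred"]; educational, not enforced

@dataclass
class Viewer:
    name: str
    memberships: set[str] = field(default_factory=set)  # e.g. {"community-member", "family-network"}

def can_view(item: HeritageItem, viewer: Viewer) -> bool:
    """An item is visible only if the viewer satisfies every protocol attached to it.
    Unrecognized protocols deny access; an item with no protocols is publicly viewable."""
    rules = {
        "public": lambda v: True,
        "community-only": lambda v: "community-member" in v.memberships,
        "women-restricted": lambda v: "women" in v.memberships,
        "family-network": lambda v: "family-network" in v.memberships,
    }
    return all(rules.get(p, lambda v: False)(viewer) for p in item.protocols)

# Example: a recording restricted to community members.
song = HeritageItem("Harvest song recording",
                    protocols={"community-only"},
                    tk_labels=["TK Community Use Only"])
print(can_view(song, Viewer("outside researcher")))                   # False
print(can_view(song, Viewer("tribal member", {"community-member"})))  # True
```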
All of the projects described here, many still in their incipient stages, aim to decolonize archives at their core. They put Indigenous knowledge-keepers in partnership with computing and heritage management professionals to help communities determine how, whether, and why their collections shall be digitized and made available. As such, they have a great deal to teach digital literary projects—literary criticism (if I may) not being a profession historically inclined to consult with living subjects very much at all. Next, I ponder why, despite great strides in both Indigenous digital collections and literary digital collections, the twain have really yet to meet.
Electronic Textualities: The Pasts and Futures of Indigenous Literature
While signatures, deeds and other Native-authored texts surface occasionally in the aforementioned heritage projects, digital projects devoted expressly to Indigenous writing are relatively few and far between.[2] Granting that Aboriginal people, like any other people, do produce writings meant to be private, as a literary scholar I am confronted daily with a rather different problem than that of cultural “protection”: a great abundance of poetry, fiction and nonfiction written by Indigenous people, much of which just never sees the larger audiences for which it was intended. How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?
Literary history is another of those unexpected places in which Indians are always found. But while Indigenous literature—both historic and contemporary—has garnered increasing attention in the academy and beyond, the Digital Humanities does not seem to have contributed very much to the expansion and promotion of these canons. Conversely, while DH has produced some dynamic and diverse literary scholarship, scholars in Native American Studies seem to be turning toward this scholarship only slowly. Perhaps digital literary studies has not felt terribly inviting to Indigenous texts; many observers (Earhart 2012; Koh 2015) have remarked that the emerging digital literary canon, indeed, looks an awful lot like the old one, with the lion’s share of the funding and prestige going to predictable figures like William Shakespeare, William Blake, and Walt Whitman. At this moment, I know of no mass movement to digitize Indigenous writing, although a number of “public domain” texts appear in places like the Internet Archive, Google Books, and Project Gutenberg.[3] Indigenous digital literature seems light years away from the kinds of scholarly and technical standards achieved by the Whitman and Rossetti Archives. And without a sizeable or searchable corpus, scholarship on Indigenous literature likewise seems light years from the kinds of text mining, topic modeling and network analysis that are au courant in DH.
Instead, we see small-scale, emergent digital collections that nevertheless offer strong correctives to master narratives of Indigenous disappearance, and that supply further material for ongoing sovereignty struggles. The Hawaiian language newspaper project is one powerful example. Started as a massive crowdsourcing effort that digitized at least half of the remarkable 100 native-language newspapers produced by Hawaiian people between the 1830s and the 1940s, it calls itself “the largest native-language cache in the Western world,” and promises to change the way Hawaiian history is seen. It might well do so if, as Noenoe Silva (2004, 2) has argued, “[t]he myth of [Indigenous Hawaiian] nonresistance was created in part because mainstream historians have studiously avoided the wealth of material written in Hawaiian.” A grassroots digitization movement like the Hawaiian Nupepa Project makes such studious avoidance much more difficult, and it brings to the larger world of Indigenous digital collections direct examples—through Indigenous literacy—of Indigenous political persistence.
It thus points to the value of the literary in Indigenous digitization efforts. Jessica Pressman and Lisa Swanstrom (2013) have asked, “What kind of scholarly endeavors are possible when we think of the digital humanities as not just supplying the archives and data-sets for literary interpretation but also as promoting literary practices with an emphasis on aesthetics, on intertextuality, and writerly processes? What kind of scholarly practices and products might emerge from a decisively literary perspective and practice in the digital humanities?” Abenaki historian Lisa Brooks (2012, 309) has asked similar questions from an Indigenous perspective, positing that digital space allows us to challenge conventional notions of literary periodization and of place, to “follow paths of intellectual kinship, moving through rhizomic networks of influence and inquiry.” Brooks and other literary historians have long argued that Indigenous people have deployed alphabetic literacy strategically to (re)build their communities, restore and revitalize their traditions, and exercise their political and cultural sovereignty. Digital literary projects, like the Hawaiian newspaper project, can offer powerful extensions of these practices in electronic space.
These were some of the questions and issues we had in mind when we started dawnlandvoices.org.[4] This archive is emergent—not a straight scan-and-upload of items residing in one physical site or group of sites, but rather a collaboration among tribal authors, tribal collections, and university-based scholars and students. It came out of a print volume, Dawnland Voices: An Anthology of Indigenous Writing from New England (Senier 2014), which I edited with eleven tribal historians. Organized by tribal nation, the book ranges from the earliest writings (petroglyphs and political petitions) to the newest (hip-hop poetry and blog entries). The print volume already aimed to be a counter-archive, insofar as it represents the literary traditions of “New England,” a region that has built its very identity on colonial dispossession, colonial boundaries and the myth of Indian disappearance. It also already aimed to decolonize the archive, insofar as it distributes editorial authority and control to Indigenous writers, historians and knowledge-keepers. At almost 700 pages, though, Dawnland in book form could only scratch the surface of the wealth of writing that regional Native people have produced, and that remains, for the most part, in their own hands.
We wanted a living document—one that could expand to include some of the vibrant pieces we could not fit in the book, one that could be revised and reshaped according to ongoing community conversation. And we wanted to keep presenting historic materials alongside new (in this case born-digital) texts, the better to highlight the long history of Indigenous writing in this region. But we also realized that this required resources. We approached the National Endowment for the Humanities and received a $38,000 Preservation and Access grant to explore how digital humanities resources might be better redistributed to empower tribal communities who want to digitize their texts, either for private tribal use or more public dissemination. The partners on this grant included three different, but representative kinds of collections: a tribal museum with some history of professional archiving and private support (the Tomaquag Indian Memorial Museum in Rhode Island); a tribal office that finds itself acting as an unofficial repository for a variety of papers and documents, and that does not have the resources to completely inventory or protect these (the Passamaquoddy Cultural Preservation Office in Maine); and four elders who have amassed a considerable collection of books, papers, and slides from their years working in the Boston Children’s Museum and Plimoth Plantation, and were storing these in their own homes (the Indigenous Resources Collaborative in Massachusetts). Under the terms of the grant, the University of New Hampshire sent digital librarians to each site to set up basic hardware and software for digitization, while training tribal historians in digitization basics. The end result of this two-year pilot project was a small exhibit of sample items from each archive.
The obstacles to this kind of work for small tribal collections are perhaps not unique, but they are intense. Digitization is expensive, time-consuming, and labor-intensive, even more so for collections that do not have ample (or any) paid staff, that can’t afford to update basic software or that don’t even have reliable internet connections. And there were additional hurdles: while the pressure from DH writ large (and granting institutions individually) is frequently to demonstrate scalability, in the end, the tribal partners on this grant did not coalesce around a shared goal of digitizing their collections wholesale. The Passamaquoddy tribal heritage preservation officer has scanned and uploaded the greatest quantity of material by far, but he has strategically zeroed in on dozens of tribal newsletters containing radical histories of Native resistance and survival in the latter half of the twentieth century. The Tomaquag Museum does want to digitize its entire collection, but it prefers to do so in-house, for optimum control of intellectual property. The Indigenous Resources Collaborative, meanwhile, would rather digitize and curate just a small handful of items as richly as possible. While these elders were initially adamant that they wanted to learn to scan and upload their own documents, they learned quickly just how stultifying this labor is. What excited them much more was the process of selecting individual documents and dreaming about how to best share these online. An old powwow flyer describing the Mashpee Wampanoag game of fireball, for instance, had them naming elders and players they could interview, with the possibility of adding metadata in the form of video or narrative audio.
More than a year after articulating this aspiration, the IRC has not begun to conduct or record any such interviews. Such a project is beyond their current energies, time and resources; and to be sure, any continuation of their work on this partner project at dawnlandvoices.org should be compensated, which will mean applying for new grants. But the delay or inaction also points to a larger conundrum: that for all of the Web’s avowed multimodality, Indigenous digital collections have generally not reflected the longstanding multimodality of Indigenous literatures themselves—in particular, their mutually sustaining interplay of oral and written forms. Some (Golumbia 2015) would attribute this to an unwillingness within DH to recognize the kinds of digital language work being done by Indigenous communities worldwide. Perhaps, too, it owes something to the history of violence embedded in “recording” or “preserving” Indigenous oral traditions (Silko 1981); the Indigenous partners with whom I have worked are generally skeptical of the need to put their traditional narratives—or even some of the recorded oral histories they may have stored on cassette—online. Too, there is the time and labor involved in recording. It is now common to hear digital publishers wax enthusiastic about the “affordances” of the Web (it seems so easy to just add an mp3), but with few exceptions, dawnlandvoices.org has not elicited many recordings, despite our invitations to authors to contribute them.
Unlike the texts in the most esteemed digital literature archives like the Rossetti Archive (edited, contextualized and encoded to the highest scholarly standard), the texts in dawnlandvoices.org are often rough, edgy, and unfinished; and that, quite possibly, is the way they will remain. Insofar as dawnlandvoices.org aspires to be a “database” at all (and we are not sure that it does), it makes sense at this point for there to be multiple pathways in and out of that collection, multiple ways of formatting and presenting material. It is probably fair to say that most scholars working on Indigenous digital archives dream of a day when these sites will have robust community engagement and commentary. At the same time, many would readily admit that it’s not as simple as building it and hoping they will come. David Golumbia (2015) has gone so far as to suggest that what marginalizes Indigenous projects within DH is the archive-centric nature of the field itself—that while “most of the major First Nations groups now maintain rich community/governmental websites with a great deal of information on history, geography, culture, and language. . . none of this work, or little of it, is perceived or labeled as DH.” Thus, the esteemed digital archives might not, in fact, be what tribal communities want most. Brown and Nicholas raise the equally provocative possibility that “[i]nstitutional databases may . . . already have been superseded by social networking sites as digital repositories for cultural information” (2012, 315). And, in fact, that most pervasive and understandably maligned of social-networking sites, Facebook, seems to be serving some tribal museums’, authors’ and historians’ immediate cultural heritage needs surprisingly well. Many post historic photos or their own writings to their walls, and generate fabulously rich commentary: identifications of individuals in pictures, memories of places and events, praise and criticism for poetry. Facebook is a proprietary and notoriously problematic platform, especially on the issue of intellectual property. And yet it has made room, at least for now, for a kind of curation that, fragile and fugitive as it is, raises the question of whether such curation should be “institutional” at all. We can see similar things happening on Twitter (as in Daniel Heath Justice’s recent “year of tweets” naming Indigenous authors) and Instagram (where artists like Stephen Paul Judd store, share, and comment on their work). Outside of DH and settler institutions, Indigenous people are creating all kinds of collections that—if they are not “archives” in a way that satisfies professional archivists—seem to do what Native people, individually and collectively, need them to do. At least for today, these collections create what First Nations artists Jason Lewis and Skawennati Tricia Fragnito call “Aboriginally determined territories in cyberspace” (2005).
What the conversations initiated by Kim Christen, Jane Anderson, Jim Enote and others can bring to digital literature collections is a scrupulously ethical concern for Indigenous intellectual property, an insistence on first voice and community engagement. What Indigenous literature, in turn, can bring to the table is an insistence on politics and sovereignty. Like many literary scholars, I often struggle with what (if anything) makes “Literature” distinctive. It’s not that baskets or katsina masks cannot be read as expressions of sovereignty—they can, and they are. But Native literatures—particularly the kinds saved by Indigenous communities themselves rather than by large collecting institutions and salvage anthropologists—provide some of the most powerful and overt archives of resistance and resurgence. The invisibility of these kinds of tribal stories and tribal ways of knowing and keeping stories is an ongoing concern, even on the “open” Web. It may be that Digital Humanities writ large will continue to struggle against the seeming centrifugal force of traditional literary and cultural canons. It is not likely, however, that Indigenous communities will wait for us.
_____
Siobhan Senier is associate professor of English at the University of New Hampshire. She is the editor of Dawnland Voices: An Anthology of Indigenous Writing from New England and dawnlandvoices.org.
[1] A study by Native Public Media (Morris and Meinrath 2009) found that broadband access in and around Native American and Alaska Native communities was less than 10 percent, sometimes as low as 5 to 6 percent.
[3] The University of Virginia Electronic Texts Center at one time had an excellent collection of Native-authored or Native-related works, but these are now buried within the main digital catalog.
_____
Works Cited
Association of Tribal Archives, Libraries, and Museums. 2013. “International Conference Program.” Santa Ana Pueblo, NM.
Boast, Robin, and Jim Enote. 2013. “Virtual Repatriation: It Is Neither Virtual nor Repatriation.” In Peter Biehl and Christopher Prescott, eds. Heritage in the Context of Globalization. SpringerBriefs in Archaeology. New York: Springer. 103–13.
Brooks, Lisa. 2012. “The Primacy of the Present, the Primacy of Place: Navigating the Spiral of History in the Digital World.” PMLA 127:2. 308–16.
Brown, Deidre, and George Nicholas. 2012. “Protecting Indigenous Cultural Property in the Age of Digital Democracy: Institutional and Communal Responses to Canadian First Nations and Māori Heritage Concerns.” Journal of Material Culture 17:3. 307–24.
Christen, Kimberly. 2005. “Gone Digital: Aboriginal Remix and the Cultural Commons.” International Journal of Cultural Property 12:3. 315–45.
Christen, Kimberly. 2012. “Does Information Really Want to Be Free?: Indigenous Knowledge Systems and the Question of Openness.” International Journal of Communication 6. 2870–93.
Cushman, Ellen. 2013. “Wampum, Sequoyan, and Story: Decolonizing the Digital Archive.” College English 76:2. 116–35.
Deloria, Philip Joseph. 2004. Indians in Unexpected Places. Lawrence: University Press of Kansas.
Roy, Loriene, and Mark Christal. 2002. “Digital Repatriation: Constructing a Culturally Responsive Virtual Museum Tour.” Journal of Library and Information Science 28:1. 14–18.
Senier, Siobhan, ed. 2014. Dawnland Voices: An Anthology of Indigenous Writing from New England. Lincoln: University of Nebraska Press.
Silko, Leslie Marmon. 1981. “An Old-Time Indian Attack Conducted in Two Parts.” In Geary Hobson, ed. The Remembered Earth: An Anthology of Contemporary Native American Literature. Albuquerque: University of New Mexico Press. 211–16.
Silva, Noenoe K. 2004. Aloha Betrayed: Native Hawaiian Resistance to American Colonialism. Durham: Duke University Press.
Srinivasan, Ramesh, et al. 2010. “Diverse Knowledges and Contact Zones within the Digital Museum.” Science, Technology, & Human Values 35:5. 735–68.
God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
– Plato, Timaeus
The term “computational capital” understands the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and recognizes contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus posited then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double-entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing, leaves no stone unturned. Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.
The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the ironclad movements of code. While “ironclad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines to the point where in some quarters “code is law” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production—categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.—are also deposited and thus operationally disappeared into our machines.
Put simply, and, in deference to contemporary attention spans, too soon, our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?
Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867, 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton,” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest not only that society as a whole has become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai—Big Artificial Intelligence (the world system as organized by computational capital) to little artificial intelligence (“you,” as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.
Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).
Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.
Introduction: The Computational Unconscious
Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay, and the book of which it is chapter one, develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or, Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power towards ends that for me are still best described by the word communist (or perhaps meta-communist but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed, and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.
Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as being the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices; which is to say that the universal Turing Machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing Machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine as perhaps among the most important abstract machines in the last century, save perhaps that of capital itself. However, both their ranking and even their separability is here what we seek to put into question.
Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.
The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoiesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.
Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption, of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes and indeed enforces a sovereignty tantamount to divinity, externalities be damned. Dissident, fugitive planning and black study—a studied refusal of optimization, a refusal of computational colonialism—may offer a way out of the current geo-(post-)political and its computational orthodoxy.
Computational Idolatry and Multiversality
In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology from a conference sponsored by the sovereign state of NYU:
As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)
Science fiction, in the form of “the Matrix,” is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world—or, rather, cosmic—spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality”; the trouble here is with “itself.” To the extent that we recognize that poiesis (making) has been extended to our machines and it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it, to the point that Reality itself is a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses: multiversal. In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.
Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”
The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that no sufficiently powerful logical system can be internally closed, since within any such system statements can be formulated that can neither be proved nor disproved—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoiesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open, there is no closure, complete self-knowledge is impossible and thus there is always an outside or a beyond, while in the latter group’s view, our philosophy, our politics and apparently our fate are wedded to a system that can know no outside since it may only render an outside in its own terms, unless, or perhaps, even if/as that encounter is catastrophic.
Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoiesis) all we can know is ourselves, along with Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any of us who are, willingly or not, participants in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx: vol. 28, 38-39).[3]
This “subject, society,” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying that “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).
The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present,” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious, are historical formations. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.
More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled the theory of the unconscious on information theory and cybernetics. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns, including the following:
By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)
Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks as to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What is Cybernetics?. She writes that, “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”
“You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”
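The constraint Liu and Lacan are describing can be made concrete with a small illustration. The sketch below is not drawn from Liu or Lacan; it is only a minimal demonstration, under the assumption that a coin-toss sequence is symbolized as overlapping triads of pluses and minuses, that “law” emerges from the act of symbolization itself rather than from the tosses:

```python
# Illustrative sketch (not Lacan's notation): a random chain of +/- signs,
# read as overlapping triads, generates its own "syntax." Each of the eight
# possible triads can be followed by only two others, a constraint produced
# by the symbolization rather than by the coin.
import random
from collections import defaultdict

random.seed(0)
tosses = [random.choice("+-") for _ in range(10_000)]
triads = ["".join(tosses[i:i + 3]) for i in range(len(tosses) - 2)]

successors = defaultdict(set)
for current, following in zip(triads, triads[1:]):
    successors[current].add(following)

for triad in sorted(successors):
    # Every triad is followed only by triads beginning with its last two signs.
    print(triad, "->", sorted(successors[triad]))
```

Out of eight equiprobable triads, each admits only two successors: in this weak but demonstrable sense, the “question” is indeed constituted and organized by the structure.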
Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that, given that cybernetic research in the West was driven by military and, later, industrial applications (that is, by applications deemed essential for the development of capitalism and the capitalist way of life), such a statement calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes.[5] For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.
Perhaps more elusive even than this historicity of the unconscious, grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,
Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)
This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing by capital markets, by apparatuses, and by digital computers, requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask: in what way have capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, structured computational paradigms? In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking, expressed in numbers, materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notion that 1) the camera is a black box that is a programme, and 2) that the photograph or technical image produces a “magical” relation to the world inasmuch as people understand the photograph as a window rather than as information organized by concepts. This amounts to the technical image as itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras. If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.
Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment-sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: the function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego and normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day-to-day and indeed nano-second to nano-second organization of life on planet earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but, more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked up, highly ramified social differences? Structural and now infra-structural inequalities include social injustices (what could be thought of as, and in a certain sense are, algorithmic racism, sexism and homophobia), and also programmatically unequal access to the many things that sustain life, as well as the legitimization of murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions, resulting from the near automation of geo-political emergence along with a vast conscription of agents, is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.
For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.
But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.
The short answer then regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly-established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed at profitably exploiting an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth. Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy, which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.
Dialectics of the Human-Machine
For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 chromosome pairs of the human genome that replicate what was thought of as the human being as an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means precisely computation.
If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance—as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine, is on the wane.
Naming the human-machine is of course a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Civilization and Colonization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from and was deployed to do the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia, in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.
This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French Revolution, with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather, it is the tendency that is unveiled by the present platform post-humanism of computational racial capital. The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?
So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean, for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisected cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, and police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been impacted by our machines. However, the POTUSian defeat of the so-called humans is double-edged to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood the dominant discourse on the posthuman as not the end of the white liberal human subject but precisely, when in the hands of those not committed to an anti-racist and decolonial project, a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body. Sovereignty itself must change (in order, as Giuseppe Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.
Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However, we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things to ever happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most); on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals. However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may be as bad or worse and, in absolute numbers, already is worse for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?
For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. The quantitative exchanges of equivalents, circulating as exchange values without qualities, are the real abstractions that give rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money, that is, to argue that money as a medium is value-neutral because it embodies what has (in many respects correctly, but in a qualified way) been termed “the universal value form,” would be to miss the entire system of leveraged exploitation that sustains the money-system. In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in merely delusional utopianism and self-interested (might one even say psychotic?) naysaying.
Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species’ lifespan as an instance of a cosmic program is not so distant from the endemic violence of postmodern—and, indeed, post-human—fascism he depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. For the individual and for the species, what we beheld in these two films was the annihilation of our agency (at the level of the individual and of the species)—and it was eerily seductive, Benjamin’s self-destruction as an aesthetic pleasure of the highest order taken to cosmic proportions and raised to the level of Art (1969).
So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism which, in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and what is now no longer history, has really been pre-history, stages of pre-history? What would it mean to say in earnest “What’s past is prologue?”[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?
I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punchcard system and mechanical tabulator that was used in the 1890 United States census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which normally took eight to ten years to tabulate, now took a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without the machines having to be rebuilt—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing Tabulating Recording Company, which renamed itself IBM in 1924.
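What the plugboard made possible can be suggested with a minimal sketch; the field names and records below are invented for illustration and are not drawn from any actual census schedule. The point is only that the same deck of cards can be re-sorted and re-counted along any sequence of categories by rewiring the tabulation rather than rebuilding the machine:

```python
# Minimal sketch of plugboard-style re-tabulation. The "cards" and field names
# are invented placeholders, not an actual census schedule.
from collections import Counter

cards = [
    {"district": "01", "sex": "F", "occupation": "weaver"},
    {"district": "01", "sex": "M", "occupation": "clerk"},
    {"district": "02", "sex": "F", "occupation": "clerk"},
    {"district": "02", "sex": "F", "occupation": "weaver"},
]

def tabulate(deck, fields):
    """Count the deck along whatever sequence of fields is 'wired in'."""
    return Counter(tuple(card[field] for field in fields) for card in deck)

# Re-'wiring' the same deck for different sorts, no rebuilding required.
print(tabulate(cards, ["district"]))
print(tabulate(cards, ["sex", "occupation"]))
```

Programming, in this rudimentary sense, consists in choosing the order of categories through which a fixed population of records will be passed, which is what made the census such a powerful instrument of classification.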
While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines, and 2) as Vicente Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the Holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).
Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish colonialism to U.S. imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:
Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.
But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.
IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).
The sorting of populations and individuals by forms of social difference including “race,” ability and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped) for the purposes of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between one-tenth and one-sixth of the population of the Philippines, and the Nazi-administered Holocaust are but two world-historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.
The mechanisms, as well as the social meaning, of computation were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English used as its ur-text Jefferson the Virginian (1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy for its suppression of information regarding Jefferson’s relation to slavery (1997).[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but language that belongs to a particular mode of sociality and racialized enfranchisement. Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and now as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard even as it also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
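The dependence of the content-indifferent model on a particular content can be stated concretely. In a first-order model the entropy of a source is H = -Σ pᵢ log₂ pᵢ, where the probabilities pᵢ are estimated from whatever reference text is supplied; the sketch below uses two placeholder strings, not the corpus Shannon actually consulted, simply to show that the frequency table, and hence the entropy estimate that calibrates “efficient” coding, changes with the text from which it is derived:

```python
# Minimal sketch: first-order letter statistics and entropy,
# H = -sum(p_i * log2(p_i)), computed from whatever reference text is supplied.
# The sample strings are placeholders, not the corpus Shannon actually used.
import math
from collections import Counter

def letter_entropy(text):
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

corpus_a = "We hold these truths to be self evident"
corpus_b = "zigzag jazz quiz"

print(round(letter_entropy(corpus_a), 3))
print(round(letter_entropy(corpus_b), 3))
```

Nothing in the formula cares which text is fed to it; everything in the resulting table does, and that asymmetry is precisely what the passage above presses on.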
In a 1987 interview, Shannon was quoted as saying, “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” If humans are the robot’s companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being,” such as it is/was, is encrypted in their function. And “we” have never been “one.”
Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues. “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another” (in Nakamura and Chow-White 2012, 30-31; italics in original).
This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines industry standards for the now-digital processing of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.
More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged with them. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.
Thus we may observe that the statistical methods utilized by IBM to find Jews in the shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view, in which computational process has its essence without reference to any concrete whatever (even if that view is not one of pure mathematical abstraction), can be found in what follows. As an article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups” for Bloomberg News almost giddily reports:
The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]
Not only is war good computer business, it’s good for computerized business. It is ironic that the technology is likened to a tapestry and is now used to sell textiles—almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early transistors for Fairchild, the company founded by defectors from Silicon Valley legend and founding father, as well as infamous eugenicist, William Shockley.[9] The article goes on to confess that the latest consumer spin-offs, which facilitate the real-time imaging of couches in your living room and are capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)
Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends that we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming at ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).
Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.
Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” a Roman term for “slave”) in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies, extended now to the machine, a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?
Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts the lower-class, mostly of-color workers leaving the Google Mountain View campus during off hours. These workers are the book scanners; they shared neither the spaces nor the perks of Google’s white-collar workers, had different parking lots and entrances, and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that their posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression, and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process undertaken by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude. Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.
The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and with it the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that is endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting—the exponential growth in memory storage means also an exponential growth in systematic forgetting—the withering away of the analogue. As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.
_____
Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.
[1] A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content-indifferent processing that the term “exchange value,” severed as it is from all qualities, indicates.
[2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).
[3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,
Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)
[4] If one takes the derivative of computational process at a particular point in space-time one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process, the summation of images is the movement of the process.
[5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.
[6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum, puts things as follows:
We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, healthcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.
The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).
[8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”
Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge.” Grey Room 18 (Winter): 26-51.
Chun, Wendy Hui Kyong. 2012. In Nakamura and Chow-White (2012). 38-69.
De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)
As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” so often seems to exclude women. This essay takes on one of the “tests” used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.
According to Wikipedia’s “notability” guideline, a subject is considered notable if it “has received significant coverage in reliable sources that are independent of the subject” (“Wikipedia:Notability” 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable; for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in coverage of female figures in Wikipedia compared to male figures. One study of Encyclopedia Britannica and Wikipedia concluded,
Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)
Five years later, another study found this bias persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the 20th century the problem of exclusion was wildly exacerbated by “sourcing and notability issues” (“Gender Bias on Wikipedia” 2017).
One potential source for buttressing the case of notable women has been identified by literary scholar Alison Booth. Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular (Booth 2004). Booth also points out that, lest we consign the genre to the realm of mere curiosity, the volumes were “indispensable aids in the formation of nationhood” (Booth 2004, 3).
To reveal the historical contingency of the purportedly neutral criteria of notability, I utilized longitudinal data compiled by Booth, which reveal that notability has never been the stable concept Wikipedia’s standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety from a historical perspective. One might be well known while remaining relatively unimportant from a historical perspective. Such distinctions are collapsed in Wikipedia, which assumes that a body of writing about a historical subject stands as prima facie evidence of notability.
While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women that have higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining pages in Wikipedia to those who mirror “the ‘Great Man Theory’ of history” (Mattern 2015) or are “notorious” (Lerner 1975).
Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and tracked their frequency over time (2004, 394–396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.
Figure 1. US women by publication date of books that included them (image source: author)
This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because over time, motherhood alone ceases to be seen as a significant contribution to history. Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar pattern may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century. The novelty of female foreign missionaries like Judson faded as more women entered the field. Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, where only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.
Although their notability changed over time, all the women depicted in Figure 1 have Wikipedia pages; this is unsurprising, as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples? Does inclusion in a relatively recent work that declares women as notable mean that these women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology of calculating missing biographies in Wikipedia, utilizing a reference corpus to identify women who might reasonably be expected to appear in Wikipedia and calculating the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing biographies quotient for individuals in selected sections of the “classified list of biographies” that appears at the end of the third volume of Notable American Women. The eleven categories with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1).
Classification              % missing
Astronomers                 0
Biologists                  0
Chemists & Physicists       0
Heroines                    0
Illustrators                0
Indian Captives             0
Naturalists                 0
Psychologists               0
Sculptors                   0
Wives of Presidents         0
Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia
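The calculation behind these percentages is simple enough to sketch in a few lines of code. The following is a minimal illustration with invented names, not data drawn from Notable American Women or Wikipedia, of how a missing-biographies quotient can be computed for any classification.

```python
# Minimal sketch of a missing-biographies quotient.
# All names here are hypothetical placeholders, not actual data.

def missing_quotient(reference_names, wikipedia_titles):
    """Percentage of names from the reference corpus with no Wikipedia entry."""
    missing = [name for name in reference_names if name not in wikipedia_titles]
    return 100 * len(missing) / len(reference_names)

# A hypothetical classification drawn from a reference corpus ...
classification = ["Person A", "Person B", "Person C"]
# ... checked against a hypothetical set of existing Wikipedia article titles.
wikipedia_titles = {"Person C"}

print(f"{missing_quotient(classification, wikipedia_titles):.0f}% missing")  # 67% missing
```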
Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as in the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Additionally, extraordinary women, such as heroines, and those who are quite rare, such as Indian captives, also have a greater chance of success in Wikipedia.[1]
Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975). “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).
Name                                Included?
Dinwiddie, Emily Wayland            no
Glenn, Mary Willcox Brown           no
Kingsbury, Susan Myra               no
Lothrop, Alice Louise Higgins       no
Pratt, Anna Beach                   no
Regan, Agnes Gertrude               no
Breckinridge, Sophonisba Preston    page
Richmond, Mary Ellen                page
Smith, Zilpha Drew                  stub
Table 2. Social Workers from Notable American Women by inclusion in Wikipedia
Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” her page is not linked to the category of American social workers (“Category:American Social Workers” 2015). If a female historical figure isn’t as exceptional as Breckinridge, she needs to be a “first” like Mary Ellen Richmond, who makes it into Wikipedia as the “social work pioneer” (“Mary Richmond” 2017).
This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).
Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder?
Delano, Jane Arminda | leading pioneer | World War I | founder of the American Red Cross Nursing Service
Fedde, Sister Elizabeth* | | | established the Norwegian Relief Society
Maxwell, Anna Caroline | pioneering activities | Spanish-American War |
Nutting, Mary Adelaide | world’s first professor of nursing | World War I | founded the American Society of Superintendents of Training Schools for Nurses
… | | | co-founded the National Association of Colored Graduate Nurses
* Fedde appears in Wikipedia primarily as a Norwegian Lutheran Deaconess. The word “nurse” does not appear on her page.
Table 3. Nurses from Notable American Women with Wikipedia entries
As the entries for nurses reveal, in addition to being first, a combination of several additional factors works in a female subject’s favor in achieving success in Wikipedia. Nurses who founded an institution or organization or participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.
If distinguishing oneself as part of a male-dominated event, by being “first” or founding something, facilitates higher levels of inclusion in Wikipedia for women in female-dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors can offer only a partial explanation (Table 4).
Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder? | Description from Wikipedia
Bagley, Sarah G. | “probably the first” | No | formed the Lowell Female Labor Reform Association | headed up the female department of a newspaper until fired because “a female department. … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
Barry, Leonora Marie Kearney | “only woman,” “first woman” | Knights of Labor | | “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
Bellanca, Dorothy Jacobs | “first full-time female organizer” | No | organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended the founding convention of the Amalgamated Clothing Workers of America | “men resented” her
Haley, Margaret Angela | “pioneer leader” | No | No | dubbed the “lady labor slugger”
Jones, Mary Harris | No | Knights of Labor; IWW | | “most dangerous woman in America”
Nestor, Agnes | No | Women’s Trade Union League | founded International Glove Workers Union |
O’Reilly, Leonora | No | Women’s Trade Union League | founded the Wage Earners Suffrage League | “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
O’Sullivan, Mary Kenney | the first woman the AFL employed | Women’s Trade Union League | founder of the Women’s Trade Union League |
Stevens, Alzina Parsons | first probation officer | Knights of Labor | |
Table 4. Labor leaders from Notable American Women with Wikipedia entries
In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders that predict success in Wikipedia. One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable. Less optimistically, it also appears that what Lerner labeled as “notorious” behavior predicts success: six of the nine women were included for a wide range of reasons, from speaking out publicly to advocating resistance.
The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion. On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.
The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender, but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).
Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average editor of Wikipedia is white, from Western Europe or the United States, between 30 and 40 years old, and overwhelmingly male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women. “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).
The hostility within Wikipedia against female contributors remains a significant barrier to altering its content, since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015). Despite years of concerted efforts to make Wikipedia more hospitable toward women, to organize editathons, and to place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). Authors of a recent study of “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure,” which includes “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).
Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “A lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.
This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)
While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.
_____
Michelle Moravec is an associate professor of history at Rosemont College.
[1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.
_____
Works Cited
Beard, Mary Ritter. 1977. “A Study of the Encyclopaedia Britannica in Relation to Its Treatment of Women.” In Ann J. Lane, ed., Making Women’s History: The Essential Mary Ritter Beard. Feminist Press at CUNY. 215–24.
Booth, Alison. 2004. How to Make It as a Woman: Collective Biographical History from Victoria to the Present. Chicago, Ill.: University of Chicago Press.
Boyer, Paul, and Janet Wilson James, eds. 1971. Notable American Women: A Biographical Dictionary. 3 vols. Cambridge, MA: Harvard University Press.
Gallart, Peter, and Maja van der Velden. 2015. “The Sum of All Human Knowledge? Wikipedia and Indigenous Knowledge.” In Nicola Bidwell and Heike Winschiers-Theophilus, eds., At the Intersection of Indigenous and Traditional Knowledge and Technology Design. Santa Rosa, CA: Informing Science Press. 117–34
Hale, Sarah Josepha Buell. 1853. Woman’s Record: Or, Sketches of All Distinguished Women, from “the Beginning” Till A.D. 1850. Arranged in Four Eras. With Selections from Female Writers of Every Age. Harper & Brothers.
Wajcman, Judy, and Heather Ford. 2017. “‘Anyone Can Edit’, Not Everyone Does: Wikipedia’s Infrastructure and the Gender Gap.” Social Studies of Science 47:4. 511-27.
Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.
Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by the Rosie the Robot maid from 1962’s TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans in Douglas Adams’s Hitchhiker’s Guide to the Galaxy’s Marvin the Paranoid Android, as well as the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped a lot of our conceptions of society’s future. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.
Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has increasingly made growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages by startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: The Church of AI.
A Promise Fulfilled?
For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Ask today’s experts and spokespeople, and that number has stayed mostly unchanged.
In 2017 AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into the conversational assistant that people carry around installed on their smart devices. It’s gotten quite good at answering simple questions or triggering simple tasks: “OK Google, how far is it from here to Hamburg?” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment or tell me that a parcel I was expecting has arrived.
Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone is used to when calling automated calling systems. Press #1 if you believe.
Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.
Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in recent decades, AI is considered to be not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as if it correlated directly with the growing “intelligence” of digital services or cyber-physical systems like thermostats or “smart” lights.
Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network could reach 52.3% when pointed at footage from BBC politics shows.
Another recent example from Google’s research department shows just how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:
The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?….This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)
Google’s researchers interpret the capabilities of the network as signs that it is creating a common super-language, one language to finally express all other languages.
These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI development used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled, not by accident, on basic logic, a lot of it looked like the first sessions in a traditional logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
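For readers unfamiliar with that older, symbolic style of AI, here is a minimal sketch in Python of what it looked like in spirit: explicit facts and rules about the world, with conclusions derived mechanically from them. The facts, the rule, and the inference routine are illustrative toys, not any particular historical system or logic language.

```python
# A toy version of the old symbolic-AI approach: explicit facts about the
# world plus a rule to reason over them. Purely illustrative.

facts = {("human", "Aristotle")}      # Aristotle is a human.
rules = [("human", "mortal")]         # All humans die (are mortal).

def infer(facts, rules):
    """Apply each rule to every matching fact until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Aristotle") in infer(facts, rules))  # True: Aristotle will die.
```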
But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early, as soon as natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to capture wide adoption in real world use cases.
And then came neural networks.
Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of said neurons can be “trained” to react in a certain way.
Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained into the network—it needs to be trained.
There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures labeled as containing a cat or not would be necessary. When processing these pictures, the network gets positive feedback if the right neuron (the one signaling the detection of a cat) fires, and it strengthens the connections that lead to this result. Where it has a 50/50 chance of being right on the first try, that chance will quickly improve to the point that it will reach very good results, given that the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
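For readers who want to see the shape of this process in code, here is a minimal sketch of such a network and its training loop. Everything in it is an illustrative stand-in: the “pictures” are random numbers, a “cat” is simply defined as an image whose upper half is brighter than average, the two output neurons are collapsed into a single “cat” probability for brevity, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

# A toy "cat detector" illustrating the input/internal/output layer structure
# and the training loop described above. The data is synthetic, not real images.
rng = np.random.default_rng(0)

n_pixels, n_hidden, n_samples = 64, 16, 400
X = rng.random((n_samples, n_pixels))                          # fake "pictures"
y = (X[:, : n_pixels // 2].mean(axis=1) > 0.5).astype(float)   # 1 = "cat", 0 = "not a cat"

W1 = rng.normal(0.0, 0.5, (n_pixels, n_hidden))                # weighted connections
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: feed the known examples through the network and nudge the
# connection weights toward the correct output (plain gradient descent).
for _ in range(2000):
    h = sigmoid(X @ W1)                    # internal layer
    out = sigmoid(h @ W2).ravel()          # output neuron: how "cat-like" is the input?
    grad_out = ((out - y) * out * (1 - out))[:, None] / n_samples
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out                   # strengthen/weaken connections
    W1 -= X.T @ grad_h

accuracy = ((sigmoid(sigmoid(X @ W1) @ W2).ravel() > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```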
Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams), but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why. Neural networks are just sets of neurons and their weighted connections. But what does the weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.
Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, lack of rain or snow, and sizes of roads), automated translations of language can almost instantly give people at least an idea of what the rest of the world is talking about, and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?
Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keep churning out like a machine, recent months alone have made the trouble with the current mainstream in AI quite obvious.
In June 2015, Google’s Photos service was involved in a scandal: their AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” And even just staying with the image detection domain, there have been, in fact, numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.
While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?
John Searle anticipated this argument in the “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response of Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating understanding Chinese? Searle continues the experiment by assuming that the rules used by the machine get translated into readable English for a person to follow. A person locked in a room with these rules, pencil and paper could respond to every Chinese text given to them as convincingly as the machine could. But few would propose that that person now “understands” Chinese in the sense that a human being who knows Chinese does.
Current trends in the reception of AI seem to disagree: if a machine can do something that used to be only possible for human cognition, it surely must be intelligent. This assumption of Intelligence serves as foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans) and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?
But how can a neural network save us if it can’t even distinguish monkeys from humans?
Thy Kingdom Come 2.0
The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.
Technologists who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first, very visible step of “upgrading” an automated process to becoming more than meets the eye. The multi-language-translation network is not treated as an interesting statistical phenomenon that needs reflection by experts in the analyzed languages and the cultures using them with regards to their structural and social similarities and the way they influence(d) one another. Rather, it is presented as a miraculous device making steps towards creating an ideal language that would have made Ludwig Wittgenstein blush.[4]
But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.
But while predicting crimes still gets most people feeling uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. But “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should you give the heart transplant to?
Computers cannot lie, they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?
The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends; in traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how to address implicit biases or discriminations when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?
One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.
The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creates or amplifies existing social discrimination or violence, the dogma of AI proponents often tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.
Religion is a complex topic without one simple definition to apply to things to decide whether they are, in fact, religions. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to completely and precisely define religion. But there are patterns that many popular religions share.
Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, there often is some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.
While this is in no way anywhere close to a comprehensive list of attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.
Singulatarianism
Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.
Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and according to most singulatarians shall) take control of most aspects of society. While the possibility of the Super-AI taking over by force is always lingering in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.
Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.
According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.
While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.
It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, its makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally Kurzweil’s own choice of words in this book is deeply religious, starting with the title of the book.
With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping every form of scientific progress along, to finally achieve everlasting life through digital uploads of consciousness on to machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). Ethical debates are in Kurzweil’s perception fundamentally pointless, with the universe and technology as god necessarily moving past them—regardless of what the result of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.
Because the intentions and reasoning of the super-AI being are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high-priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.
Not every proponent of AI goes as far as the singulatarians go. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or decide fully automatically where police should focus their attention (Perry et al 2013). The second example (usually referred to as “predictive policing”) illustrates the problem best: all training data used to build the models that are supposed to help police be more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.
While there is value in automating certain dangerous or error-prone processes, such as driving cars in order to protect human life or the environment, extending that strategy to society as a whole is a deeply problematic approach.
The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs doesn’t only leave behind the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically without any way for people to understand the reasoning, to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea, one that makes the struggle of marginalized groups, and any political program that’s not focused on optimizing[5] the status quo, unthinkable.
Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.
Daemons in the Digital Ether
Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throw-away programmer joke that, looking at today, seems like a premonition of sorts.
Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?
Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.
The world today is deeply cyber-physical. The separation of the digital and the “real” worlds that sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011) can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan and in certain cases even whether we may cross a border or not. These processes of power, which traditionally “ran on” social systems, on government organs or organizations, or maybe just on individuals, are now moving into software agents, removing the risky, biased human factor, as well as checks and balances.
The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, its ignorance towards this matter and the way that group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.
Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives about what is up for debate. Science as a tool to separate truth from falsehood is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.
With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.
When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by the hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.
On the other hand, the public does have mental guards against “opinion” and “belief” in place that get taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard is circumvented when supposedly objective technologists propose their personal tech-religion as fact.
Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.
Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. By calling these systems “intelligent,” a certain level of agency is implied, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness is constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.
But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations deploying and developing them, as Cathy O’Neil, among others, illustrates in her book Weapons of Math Destruction:
Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)
For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”
Amen
The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, co-founder of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.
But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.
The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can serve a society to create fairer, more transparent processes that are, in fact, not less but more accountable.
But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.
Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect, and constantly evaluate the data used to train these systems. To integrate perspectives of marginalized people, of people potentially affected negatively even in the first steps of the process of training these systems. And to stop offloading responsibility for the actions of automated systems to the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.
Amen.
_____
tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.
[1] Moore’s Law describes the observation, made by Intel co-founder Gordon Moore, that the number of transistors on an integrated circuit doubles roughly every 2 years (or every 18 months, depending on which version of the law is cited).
[3] Prolog is a logic programming language that expresses problems as the resolution of logical expressions.
[4] In the Philosophical Investigations (1953), Ludwig Wittgenstein argued against the idea that language corresponds to reality in some simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by the individual uses of language, rejecting the idea of an ideal, objective language.
[5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.
[7] The creation of intelligence, of life, is a feat traditionally reserved for the gods of old. This is another link to religious world views, as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of current popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.
_____
Works Cited
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Perry, Walter L., Brian McInnis, Carter C. Price, Susan C. Smith, and John S. Hollywood. 2013. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation.
Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3:3. 417–424.
Sherman, D. M. 1987. “A Prolog Model of the Income Tax Act of Canada.” ICAIL ‘87 Proceedings of the 1st International Conference on Artificial Intelligence and Law. New York, NY, USA: ACM. 127-136.
The intersection of digital studies and Indigenous studies encompasses both the history of Indigenous representation on various screens, and the broader rhetorics of Indigeneity, Indigenous practices, and Indigenous activism in relation to digital technologies in general. Yet the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code. This essay on digital Indigenous studies reflects on the social, historical, and cultural mediations involved in Indigenous production and uses of digital media by exploring moments in the integration of the Cherokee syllabary onto digital platforms. We focus on negotiations between the Cherokee Nation’s goal to extend their language and writing system, on the one hand, and the systems of standardization upon which digital technologies depend, such as Unicode, on the other. The Cherokee syllabary is currently one of the most widely available North American Indigenous language writing systems on digital devices. As the language has become increasingly endangered, the Cherokee Nation’s revitalization efforts have expanded to include the embedding of the Cherokee syllabary in the Windows Operating System, Google search engine, Gmail, Wikipedia, Android, iPhone and Facebook.
Figure 1. Wikipedia in Cherokee
With the successful integration of the syllabary onto multiple platforms, the digital practices of Cherokees suggest the advantages and limitations of digital technology for Indigenous cultural and political survivance (Vizenor 2000).
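One concrete layer of the standardization at issue here is Unicode itself: the Cherokee syllabary has its own block of code points (the main block begins at U+13A0), which is what allows the same characters to travel intact across operating systems, search engines, and social platforms. A minimal illustration in Python, showing only how the code points and their recorded character names can be listed, not any part of the localization work described below:

```python
# A small illustration of the Cherokee syllabary as Unicode code points.
# The main Cherokee block starts at U+13A0; this prints the first few
# characters with the names recorded in the Unicode character database.
import unicodedata

for codepoint in range(0x13A0, 0x13A6):
    char = chr(codepoint)
    print(f"U+{codepoint:04X} {char} {unicodedata.name(char)}")
```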
Our collaboration has resulted in a multi-voiced analysis across several essay sections. Hearne describes the ways that engaging with specific problems and solutions around “glitches” at the intersection of Indigenous and technological protocols opens up issues in the larger digital turn in Indigenous studies. Joseph Erb (Cherokee) narrates critical moments in the adoption of the Cherokee syllabary onto digital devices, drawn from his experience leading this effort at the Cherokee Nation language technology department. Connecting our conceptual work with community history, we include excerpts from an interview with Cherokee linguist Durbin Feeling—author of the Cherokee-English Dictionary and Erb’s close collaborator—about the history, challenges, and possibilities of Cherokee language technology use and experience. In the final section, Mark Palmer (Kiowa) presents an “indigital” framework to describe a range of possibilities in the amalgamations of Indigenous and technological knowledge systems (2009, 2012). Fragmentary, contradictory, and full of uncertainties, indigital constructs are hybrid and fundamentally reciprocal in orientation, both ubiquitous and at the same time very distant from the reality of Indigenous groups encountering the digital divide.
Native to the Device
Indigenous people have always been engaged with technological change. Indigenous metaphors for digital and networked space—such as the web, the rhizome, and the river—describe longstanding practices of mnemonic retrieval and communicative innovation using sign systems and nonlinear design (Hearne 2017). Jason Lewis describes the “networked territory” and “shared space” of digital media as something that has “always existed for Aboriginal people as the repository of our collected and shared memory. That hardware technology has made it accessible through a tactile regime in no way diminishes its power as a spiritual, cosmological, and mythical ‘realm’” (175). Cherokee scholar (and former programmer) Brian Hudson includes Sequoyah in a genealogy of Indigenous futurism as a representative of “Cherokee cyberpunk.” While retaining these scholars’ understanding of the technological sophistication and adaptability of Indigenous peoples historically and in the present, taking up a heuristic that recognizes the problems and disjunction between Indigenous knowledge and digital development also enables us to understand the challenges faced by communities encountering unequal access to computational infrastructures such as broadband, hardware, and software design. Tracing encounters between the medium specificity of digital devices and the specificity of Indigenous epistemologies returns us to the incommensurate purposes of the digital as both a tool for Indigenous revitalization and as a sociopolitical framework that makes users do things according to a generic pattern.
The case of the localization of Cherokee on digital devices offers insights into the paradox around the idea of the “digital turn” explored in this b2o: An Online Journal special issue—that on the one hand, the digital turn “suggests that the objects of our world are becoming better versions of themselves. On the other hand, it suggests that these objects are being transformed so completely that they are no longer the things they were to begin with.” While the former assertion is reflected in the techno-positive orientation of much news coverage of the Cherokee adoption on the iPhone (Evans 2011) as well as other Indigenous initiatives such as video game production (Lewis 2014), the latter description of transformation beyond recognizable identity resembles the goals of various historical programs of assimilation, one of the primary “logics of elimination” that Patrick Wolfe identifies in his seminal essay on settler colonialism.
The material, representational, and participatory elements of digital studies have particular resonance in Indigenous studies around issues of land, language, political sovereignty, and cultural practice. In some cases the digital realm hosts or amplifies the imperial imaginaries pre-existing in the mediascape, as Jodi Byrd demonstrates in her analyses of colonial narratives—narratives of frontier violence in particular—normalized and embedded in the forms and forums of video games (2015). Indigeneity is also central to the materialities of global digitality in the production and dispensation of the machines themselves. Internationally, Indigenous lands are mined for minerals to make hardware and targeted as sites for dumping used electronics. Domestically in the United States, Indigenous communities have provided the labor to produce delicate circuitry (Nakamura 2014), even as rural, remote Indigenous communities and reservations have been sites of scarcity for digital infrastructure access (Ginsburg 2008). Indigenous communities such as those in the Cherokee Nation are rightly on guard against further colonial incursions, including those that come with digital environments. Communities have concerns about language localization projects: how are we going to use this for our own benefit? If it’s not for our benefit, then why not compute in the colonial language? Are they going to steal our medicine? Is this a further erosion of what we have left?
Lisa Nakamura (2013) has taken up the concept of the glitch as a way of understanding online racism, first as it is understood by some critics as a form of communicative failure or “glitch racism,” and second as the opposite, “not as a glitch but as part of the signal,” an “effect of internet on a technical level” that comprises “a discursive act in itself, not an obstruction to that act.” In this article we offer another way of understanding the glitch as a window onto the obstacles, refusals, and accommodations that take place at an infrastructural level in Indigenous negotiations of the digital. Olga Goriunova and Alexei Shulgin define “glitch” as “an unpredictable change in the system’s behavior, when something obviously goes wrong” (2008, 110).
A glitch is a singular dysfunctional event that allows insight beyond the customary, omnipresent, and alien computer aesthetics. A glitch is a mess that is a moment, a possibility to glance at software’s inner structure, whether it is a mechanism of data compression or HTML code. Although a glitch does not reveal the true functionality of the computer, it shows the ghostly conventionality of the forms by which digital spaces are organized. (114)
Attending to the challenges that arise in Indigenous-settler negotiations of structural obstacles—the work-arounds, problem-solving, false starts, failures of adoption—reveals both the adaptations summoned forth by the standardization built into digital platforms and the ways that Indigenous digital activists have intervened in digital homogeneity. By making visible the glitches—ruptures and mediations of rupture—in the granular work of localizing Cherokee, we arrive again and again at the cultural and political crossroads where Indigenous boundaries become visible within infrastructures of settler protocol (Ginsburg 1991). What has to be done, what has to be addressed, before Cherokee speakers can use digital devices in their own language and their own writing system, and what do those obstacles reveal about the larger orientation of digital environments? In particular, new digital platforms channel adaptations towards the bureaucratization of language, dictating the direction of language change through conventions like abbreviations, sorting requirements, parental controls and autocorrect features.
Within the framework of computational standardization, Indigenous distinctiveness—Indigenous sovereignty itself—becomes a glitch. We can see instantiations of such glitches arising from moments of politicized refusal, as captured in Mohawk scholar Audra Simpson’s insight that “a good is not a good for everyone” (1). Yet we can also see moments when Indigenous refusals “to stop being themselves” (2) lead to strategies of negotiation and adoption, and even, paradoxically, to a politics of accommodation (itself a form of agency) in the uptake of digital technologies. Michelle Raheja takes up the intellectual and aesthetic iterations of sovereignty to theorize Indigenous media production in terms of “visual sovereignty,” which she defines as “the space between resistance and compliance” within which Indigenous media-makers “revisit, contribute to, borrow from, critique, and reconfigure” film conventions, while still “operating within and stretching the boundaries of those same conventions” (1161). We suggest that like Indigenous self-representation on screen, Indigenous computational production occupies a “space between resistance and compliance,” a space which is both sovereigntist and, in its lived reality at the intersection of software standardization and Indigenous language precarity, glitchy.
Our methodology, in the case study of Cherokee language technology development that follows, might be called “glitch retrieval.” We focus on pulse points, moments, stories and small landmarks of adaptation, accommodation, and refusal in the adoption of Sequoyah’s Cherokee syllabary to mobile digital devices. In the face of the wave of publicity around digital apps (“there’s an app for that!”), the story of the Cherokee adoption is not one of appendage in the form of downloadable apps but rather the localization of the language as “native to the device.” Far from being a straightforward development, the process moved in fits and starts, beset with setbacks and surprises, delineating unique minority and endangered Indigenous language practices within majoritarian protocols. To return to Goriunova and Shulgin’s definition, we explore each glitch as an instance of “a mess” that is also “a moment, a possibility,” one that “allows insight” (2008). Each of the brief moments narrated below retrieves an intersection of problem and solution that reveals Indigenous presence as well as “the ghostly conventionality of the forms by which digital spaces are organized” (114). Retrieving the origin stories of Cherokee language technology—the stories of the glitches—gives us new ways to see both the limits of digital technology as it has been imagined and built within structures of settler colonialism, and the action and shape of Indigenous persistence through digital practices.
Cherokee Language Technology and Mobile Devices
Each generation is crucial to the survival of Indigenous languages. Adaptation, and especially adaptation to new technologies, is an important factor in Indigenous language persistence (Hermes et al. 2016). The Cherokee, one of the largest of the Southeast tribes, were early adopters of language technologies, beginning with the syllabary writing system developed by Sequoyah between 1809 and 1820 and presented to the Cherokee Council in 1821. The circumstances of the development of the Cherokee syllabary are nearly unique in that 1) the writing system originated from the work of one man, in the space of a single decade; and 2) it was initiated and ultimately widely adopted from within the Indigenous community itself rather than being developed and introduced by non-Native missionaries, linguists, or other outsiders.
Unlike alphabetic writing based on individual phonemes, a syllabary consists of written symbols indicating whole syllables, which can be more easily developed and learned than alphabetic systems due to the stability of each syllable sound. The Cherokee syllabary uses written characters that each represent a whole syllable, typically a consonant plus a vowel, such as “Ꮉ”, which is the sound of “ma,” and Ꮀ, for the sound “ho.” Sequoyah’s original writing was done with quill and ink, a process that involved cursive characters, but this handwritten orthography gave way to a block print character set for the Cherokee printing press (Cushman 2011). The Cherokee Phoenix was the first Native American newspaper in the Americas, published in Cherokee and English beginning in 1828. Since then, Cherokee people have adapted their language and writing system early and often to new technologies, from typewriters to dot matrix printers. This historical adaptation includes a millennial transformation from technologies that required special training and access to machines, like typewriters designed with Cherokee characters, to the embedding of the syllabary as a standard feature on all platforms for commercially available computers and mobile devices. Very few Indigenous languages have this level of computational integration—in part because very few Indigenous languages have their own writing systems—and the historical moments we present here in the technologization of the Cherokee language illustrate both problems and possibilities of language diversity in standardization-dependent platforms. In the following section, we offer a community-based history of Cherokee language technology in stories of the transmission of knowledge between two generations—Cherokee linguist Durbin Feeling, who began teaching and adapting the language in the 1960s, and Joseph Erb, who worked on digital language projects starting in the early 2000s—focusing on shifts in the uptake of language technology.
In the early and mid-twentieth century, churches in the Cherokee Nation were among the sites for teaching and learning Cherokee literacy. Durbin Feeling grew up speaking Cherokee at home, and learned to read the language as a boy by following along as his father read from the Cherokee New Testament. He became fluent in writing the language while serving in the US military in Vietnam, when he would read the Book of Psalms in Cherokee. His curiosity about the language grew as he continued to notice the differences between the written Cherokee usage of the 1800s—codified in texts like the New Testament—and the Cherokee spoken by his community in the 1960s. Beginning with the bilingual program at Northeastern State University (translating syllabic writing into phonetic writing), Feeling worked on Cherokee language lessons and a Cherokee dictionary, for which he translated words from a Webster’s dictionary, recording each word on tape and writing it out on handwritten index cards. Feeling recalls that in the early 1970s,
Back then they had reel to reel recorders and so I asked for one of those and talked to anybody and everybody and mixed groups, men and women, men with men, women with women. Wherever there were Cherokees, I would just walk up and say do you mind if I just kind of record while you were talking, and they didn’t have a problem with that. I filled up those reel to reel tapes, five of them….I would run it back and forth every word, and run it forward and back again as many times as I had to, and then I would hand write it on a bigger card.
So I filled, I think, maybe about five of those in a shoe box and so all I did was take the word, recorded it, take the next word, recorded it, and then through the whole thing…
There was times the churches used to gather and cook some hog meat, you know. It would attract the people and they would just stand around and joke and talk Cherokee. Women would meet and sew quilts and they’d have some conversations going, some real funny ones. Just like that, you know? Whoever I could talk with. So when I got done with that I went back through and noticed the different kinds of sounds…the sing song kind of words we had when we pronounced something (Erb and Feeling 2016).
The project began with handwriting in syllabary, but the dictionary used phonetics with tonal markers, so Feeling went through each of the five boxes of index cards again, labeling them with numbers to indicate the pitch heights of the sounds.
Feeling and his team experimented with various machines, including manual typewriters with syllabary keys (manufactured by the well-known Hermes typewriter company), new fonts using a dot matrix printer, and electric typewriters with the Cherokee syllabary on the type ball—the typist had to memorize the locations of all 85 characters. Early attempts to build computer programs allowing users to type in Cherokee resulted in documents that were confined to one computer and could not be easily shared except by printing them out.
Figure 2. Typewriter keyboard in Cherokee (image source: authors)
Beginning around 1990, a number of linguists and programmers with interests in Indigenous languages began working with the Cherokee, including Al Webster, who used Mac computers to create a program that, as Feeling described it, “introduced what you could do with fonts with a fontographer—he’s the one who made those fonts that were just like the old print, you know way back in the eighteen hundreds.” Then in the mid-1990s Michael Everson began working with Feeling and others to integrate Cherokee glyphs into Unicode, the primary standard for software internationalization. Arising from discussions between engineers at Apple and Xerox, Unicode began in late 1987 as a project to standardize character encoding for computation. Although the original goal of Unicode was to encode all of the world’s writing systems, major languages came first. Michael Everson’s company Evertype has been critical to broader language inclusion, encoding minority and Indigenous languages such as Cherokee, which was added to the Unicode Standard in 1999 with the release of version 3.0.
Having begun language work with handwritten index cards in the 1960s, and later with typewriters accessible to only one or two people with specialized skills, Feeling saw Cherokee adopted into Unicode in 1999 and integrated into Apple computer operating systems in 2003. When Apple and the Cherokee Nation publicized the new localization of Cherokee on the iPhone (running iOS 4.1) in December 2010, the story was picked up internationally, as well as locally among Cherokee communities. By 2013, users could text, email, and search Google in the syllabary on smartphones and laptops, devices that came with the language already embedded as a standardized feature and that were available at chain stores like Walmart. This development involved different efforts at multiple locations, sometimes simultaneously, and over time. While Apple added Unicode-compliant Cherokee glyphs to the Macintosh in 2003, the Cherokee Nation, as a government entity, used PC computers rather than Macs. PCs had yet to implement Unicode-compliant Cherokee fonts, so there was little access to the writing system on the Nation’s computers and no known community adoption. At the time, the Cherokee Nation was already using an adapted English font that displayed Cherokee characters but was not Unicode compliant.
One of the first attempts to introduce a Unicode-compliant Cherokee font and keyboard came with the Indigenous Language Institute conference at Northeastern State University in Oklahoma in 2006, where the Institute made the font available on flash drives and provided training to language technologists at the Cherokee Nation. However, the program was not widely adopted due to anticipated wait times in getting the software installed on Cherokee Nation computers. Further, the majority of users did not understand the difference between the new Unicode-compliant fonts and the non-Unicode fonts they were already using. The non-Unicode Cherokee font and keyboard used the same keystrokes, and looked the same on screen as the Unicode-compliant system, but certain keys (especially those for punctuation) produced glyphs that would not transfer between computers, so files could not be sent and re-opened on another computer without extensive corrections. The value of Unicode compliance lies in the interoperability of text across systems, the crucial first step towards integration with mobile devices, which are more useful in remote communities than desktop computers. Addition to Unicode is the first of five steps—the others being development of CLDR data, an open source font, a keyboard layout, and a word frequency list—that must be completed before companies can encode a new language into their platforms for computer operating systems. These five steps act as a space of exchange between Indigenous writing systems and digital platforms, within which differences are negotiated.
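The difference between the legacy font hack and a Unicode-compliant encoding can be made concrete in a few lines of Python. The sketch below is purely illustrative (it is not the Cherokee Nation's tooling, and the Latin letters standing in for a legacy file are a hypothetical example): in a Unicode-compliant file, the Cherokee identity of each character travels with the text itself, while a font-hack file stores ordinary Latin code points that only look like syllabary when one particular custom font is installed.

# Inspect the code points behind two files that might look identical on screen.
import unicodedata

unicode_text = "ᏌᏊ"    # "one" (sa-quu), stored at Cherokee code points
font_hack_text = "dw"   # hypothetical legacy file: Latin letters restyled by a custom font

for label, text in [("Unicode-compliant", unicode_text), ("legacy font hack", font_hack_text)]:
    print(label)
    for ch in text:
        # The code point and character name are part of the data, independent of any font.
        print(f"  U+{ord(ch):04X}  {unicodedata.name(ch)}")

Run on the Unicode-compliant string, the printed names identify Cherokee letters; run on the font-hack stand-in, they identify Latin letters, which is why such files sort, search, and transfer between computers as though they were English.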
CLDR
The Common Locale Data Repository (CLDR) is a set of key terms for localization, including months, days, years, countries, and currencies, as well as their abbreviations. This core information is localized on the iPhone and becomes the base from which calendars and other native and external apps draw on the device. Many Indigenous languages, including Cherokee, don’t have bureaucratic language, such as abbreviations for days of the week, and need to create them—the Cherokee Nation’s Translation Department and Language Technology Department worked together to create new Cherokee abbreviations for calendrical terms.
Figure 3. Weather in Cherokee (image source: authors)
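One hedged way to see how applications consume CLDR data is through the Babel library in Python, which bundles CLDR locale data. Whether the chr (Cherokee) locale resolves with Cherokee abbreviations depends on the CLDR release bundled with the installed version of Babel, so this snippet is a sketch of the mechanism rather than a guarantee of coverage.

# Pull abbreviated day and month names for the Cherokee locale from CLDR data.
# "chr" is the ISO 639 code for Cherokee; availability of this locale in the
# installed Babel/CLDR data is an assumption here.
from babel import Locale

chr_locale = Locale.parse("chr")

# Calendar widgets and weather apps draw strings like these from CLDR rather
# than hard-coding them, which is why the abbreviations had to be created.
print(chr_locale.days["format"]["abbreviated"])
print(chr_locale.months["format"]["abbreviated"])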
Open Source Font
Small communities don’t have budgets to purchase fonts for their languages, and such fonts aren’t financially viable for commercial companies to develop, so the challenge for minority language activists is to find sponsorship for the creation of an open source font that will work across systems and is available for anyone to adopt into any computer or device. Working with Feeling, Michael Everson developed the open source font for Cherokee. Plantagenet Cherokee (designed by Ross Mills) was the first font to bring Cherokee into Windows (Vista) and Mac OS X (Panther). If there is no font for a script on a Unicode-compliant device—that is, the device does not have the language glyphs embedded—then users will see a string of boxes, the default filler for Unicode code points that the system cannot display.
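What it means for a device to lack the language glyphs can be sketched with the fontTools library in Python; the font filename below is hypothetical, and the check simply asks whether a font's character map covers the 85 code points of the original Cherokee block, any gap in which renders as the boxes described above.

# Check a font's coverage of the Cherokee block (U+13A0 through U+13F4).
from fontTools.ttLib import TTFont

CHEROKEE_BLOCK = range(0x13A0, 0x13F5)  # 85 code points, added in Unicode 3.0

font = TTFont("SomeOpenSourceFont.ttf")  # hypothetical font file
cmap = font.getBestCmap()                # best available Unicode character map

missing = [cp for cp in CHEROKEE_BLOCK if cp not in cmap]
print(f"{len(CHEROKEE_BLOCK) - len(missing)} code points mapped, "
      f"{len(missing)} would display as boxes")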
Keyboard Layout
New languages need an input method, and companies generally want the most widely used versions made available in open source. Cherokee has both a QWERTY keyboard, which is a phonetically-based Cherokee language keyboard, and a “Cherokee Nation” layout using the syllabary. Digital keyboards for mobile technologies are more complicated to create than physical keyboards and involve intricate collaboration between language specialists and developers. When developing the Cherokee digital keyboard for the iPhone, Apple worked in conjunction with the Translation Department and Language Technology Department at the Cherokee Nation, experimenting with several versions to accommodate the 85 Cherokee characters in the syllabary without creating too many alternate keyboards (the Cherokee Nation’s original involved 13 keyboards, whereas English has 3). Apple ultimately adapted a keyboard that involved two different ways of typing on the same keyboard, combining pop-up keys and an autocomplete system.
Figure 4. Mobile device keyboard in Cherokee (image source: authors)
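As a way of picturing how a phonetic layout and pop-up candidates can share one keyboard, the toy Python sketch below maps typed romanizations to syllabary characters and offers prefix-based suggestions; it uses only the two syllable/character pairs given earlier in this essay and is in no way the shipped Apple or Cherokee Nation implementation.

# Toy input method: phonetic typing plus pop-up candidate suggestions.
# A real keyboard covers all 85 syllabary characters; only two are shown here.
PHONETIC_MAP = {
    "ma": "Ꮉ",
    "ho": "Ꮀ",
}

def candidates(typed: str) -> list[str]:
    """Syllabary characters whose romanization begins with what has been typed."""
    return [glyph for syllable, glyph in PHONETIC_MAP.items() if syllable.startswith(typed)]

def commit(typed: str) -> str:
    """Replace a fully typed syllable with its syllabary character, if known."""
    return PHONETIC_MAP.get(typed, typed)

print(candidates("m"))              # pop-up suggestions after a single keystroke
print(commit("ma") + commit("ho"))  # prints the syllabary characters for "ma" and "ho"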
Word Frequency List
The word frequency list is a standard requirement for most operating systems to support autocorrect spelling and other tasks on digital devices. Programmers need a word database, in Unicode, large enough to adequately source programs such as autocomplete. In order to generate the many thousands of words needed to seed the database, the Cherokee Nation had to provide Cherokee documents typed in the Unicode version of the language. But as with other languages, there were many older attempts to embed Cherokee in typewriters and computers that predate Unicode, leading to a kind of catch-22: the Cherokee Nation needed to already have documents produced in Unicode in order to get the language into computer operating systems and adopted for mobile technologies, but it didn’t have many documents in Unicode because the language hadn’t yet been integrated into those Unicode-compliant systems. In the end the CN employed Cherokee speakers to create new documents in Unicode—re-typing the Cherokee Bible and other documents—to create enough words for a database. Their efforts were complicated by the existence of multiple versions of the language and spelling, and by previous iterations of language technology and infrastructure.
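The mechanics of such a list are simple even if assembling the underlying corpus was not; the sketch below uses a one-line stand-in corpus (two Cherokee words quoted elsewhere in this essay) to show the kind of frequency table that autocomplete and autocorrect systems are seeded with.

# Build a toy word frequency list from Unicode Cherokee text.
from collections import Counter

# Hypothetical stand-in for the re-typed Unicode documents described above.
corpus = "ᏌᏊ ᏔᎵ ᏌᏊ"

frequencies = Counter(corpus.split())
for word, count in frequencies.most_common():
    print(word, count)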
Translation
Many of the English language words and phrases that are important to computational concepts, such as “security,” don’t have obvious equivalents in Cherokee (or as Feeling said, “we don’t have that”). How does one say “error message” in Cherokee? The CN Translation Department invented words—striving for both clarity and agreement—in order to address coding concepts for operating systems, error messages, and other phrases (which are often confusing even in English) as well as more general language such as the abbreviations discussed above. Feeling and Erb worked together with elders, CN staff, and professional Cherokee translators to invent descriptive Cherokee words for new concepts and technologies, such as ᎤᎦᏎᏍᏗ (u-ga-ha-s-di) or “to watch over something” for security; ᎦᎵᏓᏍᏔᏅ ᏓᎦᏃᏣᎳᎬᎯ (ga-li-da-s-ta-nv da-ga-no-tsa-la-gv-hi) or “something is wrong” for error message; ᎠᎾᎦᎵᏍᎩ ᎪᏪᎵ (a-na-ga-li-s-gi go-we-li) or “lightning paper” for email; and ᎠᎦᏙᎥᎯᏍᏗ ᎠᏍᏆᏂᎪᏗᏍᎩ (a-ga-no-v-hi-s-di a-s-qua-ni-go-di-s-gi) or “knowledge keeper” for computers. For English words like “luck” (as in “I’m feeling lucky,” a concept which doesn’t exist in Cherokee), they created new idioms, such as “ᎡᎵᏊ ᎢᎬᏱᏊ ᎠᏆᏁᎵᏔᏅ ᏯᏂᎦᏛᎦ” (e-li-quu i-gv-yi-quu a-qua-ne-li-ta-na ya-ni-ga-dv-ga) or “I think I’ll find it on the first try.”
Sorting
When the Unicode-compliant Plantagenet Cherokee font was first introduced in Microsoft Windows with Vista (2006), the company didn’t add Cherokee to the sorting function (the ability to sort files by numeric or alphabetic order) in its system. When Cherokee speakers named files in the language, they arrived at the limits of the language technology. These limits determine parameters in a user’s personal computing, the point at which naming files in Cherokee or keeping a computer calendar in Cherokee become forms of language activism that reveal the underlying dominance of English in the deeper infrastructure of computational systems. When a user sent a file named with Cherokee characters, such as “ᏌᏊ” (sa-quu, or “one”) and “ᏔᎵ” (ta-li, or “two”), receiving computers could not put the file into one place or another because the core operating system had no sorting order for the Cherokee code points, and the computer would crash. A sorting order for Cherokee was not added to Windows until Windows 8.
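The gap between having code points and having a sorting order can be seen in a few lines of Python: the default sort orders strings by raw code point, which is all an operating system can do for a script it has no collation rules for, while locale-aware ordering depends on tailoring data such as ICU's. Whether any given ICU build ships a Cherokee tailoring is an assumption on our part, not a guarantee.

# Raw code-point ordering versus locale-aware collation for Cherokee strings.
names = ["ᏔᎵ", "ᏌᏊ"]  # ta-li ("two"), sa-quu ("one")

print(sorted(names))  # code-point order, oblivious to any Cherokee convention

try:
    import icu  # PyICU bindings; a "chr" tailoring may or may not be present
    collator = icu.Collator.createInstance(icu.Locale("chr"))
    print(sorted(names, key=collator.getSortKey))
except ImportError:
    print("PyICU not installed; only code-point ordering is available")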
Parental Controls
Part of the protocol for operating systems involves standard protections like parental controls—the ability to enable a program to automatically censor inappropriate language. In order to integrate Cherokee into an OS, the company needed lists of offensive language or “curse words” that could be flagged in parental restrictions settings for their operating system. Meeting the needs of these protocols was difficult linguistically and culturally, because Cherokee does not have the same cultural taboos as English around words for sexual acts or genitals; most Cherokee words are “clean words,” with offensive speech communicated through context rather than the words themselves. Also, because the Cherokee language involves tones, inappropriate meanings can arise from alternate tonal emphases (and tone is not reflected in the syllabary). Elder Cherokee speakers found it culturally difficult to speak aloud those elements of Cherokee speech that are offensive, while non-Cherokee-speaking computer company employees who had worked with other Indigenous languages did not always understand that not all Indigenous languages are alike—“curse words” in one language are not inappropriate in others. Finally, almost all of the potentially offensive Cherokee words that certain technology companies sought not only did not carry the same offensive connotations as their English translations, but also carried dual or multiple meanings, so blocking them would also block a common word with no inappropriate meaning.
Mapping and Place Names
One of the difficulties for Cherokees working to create Cherokee-language names for countries and territories was the Cherokee Nation’s own exclusion from the lists. Speakers translated the names of even tiny nations into Cherokee for lists and maps in which the Cherokee Nation itself did not appear. Discussions of terminologies for countries and territories were frustrating because the Cherokee themselves were not included, making the colonial erasure of Indigenous nationhood and territories visible to Cherokee speakers as they did the translations. Erb is currently working with Google Maps to revise their digital maps to show federally recognized tribal nations’ territories.
Passwords and Security
One of the first attempts to introduce Unicode-compliant Cherokee on computers for the Immersion School, ᏣᎳᎩ ᏧᎾᏕᎶᏆᏍᏗ (tsa-la-gi tsu-na-de-lo-qua-s-di), involved problems and glitches that temporarily set back adoption of Unicode systems. The CN Language Technology Department added the Unicode-compliant font and keyboards to an Immersion School curriculum developer’s computer. However, at the time computers accepted only English passwords. After the curriculum developer had been typing in Cherokee and left their desk, their computer automatically logged off (auto-logoff is standard security for government computers). Temporarily locked out of their computer, they couldn’t switch the keyboard back to English to type the English password. Other teachers and translators heard about this “lockout” and most decided against having the new Unicode-compliant fonts on their computers. Glitches like these slowed the rollout of Unicode-compliant fonts and set back the adoption process in the short term.
Community Adoption
When computers began to enter Cherokee communities, Feeling recalls his own hesitation about social media sites like Facebook: “I was afraid to use that.” When in 2011 there was a contested election for Chief of the Nation, and social media provided faster updates than traditional media, many community members signed up for Facebook accounts so they could keep abreast of the latest news about the election.
Figure 5. Facebook in Cherokee (image source: authors)
Similarly, when Cherokee first became available on the iPhone with iOS 4.1, many Cherokee people were reluctant to use it. Feeling says he was “scared that it wouldn’t work, like people would get mad or something.” But older speakers wanted to communicate with family members in Cherokee, and they provided the pressure for others to begin using mobile devices in the language. Feeling’s older brother, also a fluent speaker, bought an iPhone just to text with his brother in Cherokee, because his Android phone wouldn’t properly display the language.
In 2009, the Cherokee Nation introduced Macintosh computers in a 1:1 computer-to-student ratio for the second and third grades of the Cherokee Immersion school, and gave students air cards to get wireless internet service at home through cell towers (because internet was unavailable in many rural Cherokee homes). Up to this point the students spoke in Cherokee at school, but rarely generalized their Cherokee language outside of school or spoke it at home. With these tools, students could—and did—get on FaceTime and iChat from home and in other settings to talk with classmates in Cherokee. For some parents, it was the first time they had heard their children speaking Cherokee at home. This success convinced many in the community of the worth of Cherokee language technologies for digital devices.
The ultimate community adoption of Cherokee in digital forms—computers, mobile devices, search engines and social media—came when the technologies were most applicable to community needs. What worked was not clunky modems for desktops but iPhones that could function in communities without internet infrastructure. The story of Cherokee adoption into digital devices illustrates the pull towards English-language structures of standardization for Indigenous and minority language speakers, who are faced with challenges of skill acquisition and adaptation; language development histories that involve versions of orthographies, spellings, neologisms and technologies; and problems of abstraction from community context that accompany codifying practices. Facing the precarity of an eroding language base and the limitations and possibilities of digital devices, the Cherokee and other Indigenous communities have strategically adapted hardware and software for cultural and political survivance. Durbin Feeling describes this adaptation as a Cherokee trait: “It’s the type of people that are curious or are willing to learn. Like we were in the old times, you know? I’m talking about way back, how the Cherokees adapted to the English way….I think it’s those kind of people that have continued in a good way to use and adapt to whatever comes along, be it the printing press, typewriters, computers, things like that. … Nobody can take your language away. You can give it away, yeah, or you can let it die, but nobody can take it away.”
Indigital Frameworks
Our case study reveals important processes in the integration of Cherokee knowledge systems with the information and communication technologies that have transformed notions of culture, society and space (Brey 2003). This kind of creative fusion is nothing new—Indigenous peoples have been encountering and exchanging with other peoples from around the world and adopting new materials, technologies, ideas, standards, and languages to meet their own everyday needs for millennia. The emerging concept indigital describes such encounters and collisions between the digital world and Indigenous knowledge systems, as highlighted in The Digital Arts and Humanities (Travis and von Lünen 2016). Indigital describes the hybrid blending or amalgamation of Indigenous knowledge systems including language, storytelling, calendar making, and song and dance, with technologies such as computers, Internet interfaces, video, maps, and GIS (Palmer 2009, 2012, 2013, 2016). Indigital constructs are forms of what Bruno Latour calls technoscience (1987), the merging of science, technology, and society—but while Indigenous peoples are often left out of global conversations regarding technoscience, the indigital framework attempts to bridge such conversations.
Indigital constructs exist because knowledge systems like language are open, dynamic, and ever-changing; are hybrid as two or more systems mix, producing a third; require the sharing of power and space which can lead to reciprocity; and are simultaneously everywhere and nowhere (Palmer 2012). Palmer associates indigital frameworks with Indigenous North Americans and the mapping of Indigenous lands by or for Indigenous peoples using maps and GIS (2009; 2012; 2016). GIS is digital mapping and database software used for collecting, manipulating, analyzing, and mapping various spatial phenomena. Indigenous language, place-names, and sacred sites often converge with GIS, resulting in indigital geographic information networks. The indigital framework, however, can be applied to any encounter and exchange involving Indigenous peoples, technologies, and cultures.
First, indigital constructs emerge locally, often when individuals or groups of individuals adopt and experiment with culture and technology within spaces of exchange, as happens in the moments of challenge and success in the integration of Cherokee writing systems to digital devices outlined in this essay. Within spaces of exchange, cultural systems like language and technology do not stand alone as dichotomous entities. Rather, they merge together creating multiplicity, uncertainty, and hybridization. Skilled humans, typewriters, index cards, file cabinets, language orthographies, Christian Bibles, printers, funding sources, transnational corporations, flash drives, computers, and cell-phones all work to stabilize and mobilize the digitization of the Cherokee language. Second, indigital constructs have the potential to flow globally; Indigenous groups and communities tap into power networks constructed by global transnational corporations, like Apple, Google, or IBM. Apple and Google are experts at creating standardized computer designs while connecting with a multitude of users. During negotiations with Indigenous communities, digital technologies are transformative and can be transformed. Finally, indigital constructs introduce different ways that languages can be represented, understood, and used. Differences associated with indigital constructs include variations in language translations, multiple meanings of offensive language, and contested place-names. Members of Indigenous communities have different experiences and reasons for adopting or rejecting the use of indigital constructs in the form of select digital devices like personal computers and cell-phones.
One hopeful aspect in this process is the fact that Indigenous knowledge systems and digital technologies are combinable. The idea of combinability is based on the convergent nature of digital technologies and the creative intention of the artist-scientist. In fact, electronic technologies enable new forms from such combinations, like Cherokee language keyboards, Kiowa story maps and GIS, or Maori language dictionaries. Digital recordings of community members or elders telling important stories that hold lessons for future generations are becoming more widely available, made using audio or visual devices or a combination of both formats. Digital prints of maps can be easily carried to roundtables for discussion about the environment (Palmer 2016), with audiovisual images edited on digital devices and uploaded or downloaded to other digital devices and eventually connected to websites. The mapping of place-names, creation of Indigenous language keyboards, and integration of stories into GIS require standardization, yet those standards are often defined by technocrats far removed from Indigenous communities, with a lack of input from community members and elders. Whatever the intention of the elders telling the story or the digital artist creating the construction, this is an opportunity for the knowledge system and its accompanying information to be shared.
Ultimately, how do local negotiations on technological projects influence final designs and representations? Indigital constructions (and spaces) are hybrid and require mixing at least two things to create a new third construct or third space (Bhabha 2006). Creation of a new Cherokee bureaucratic language to meet the iPhone’s CLDR requirements for representing calendar elements, through negotiations between Cherokee language specialists and computer language specialists, resulted in hybrid space-times: a hybrid calendar shared as a form of Cherokee-constructed technoscience. The same process applied to the development of specialized and now standardized Cherokee fonts and keyboards for the iPhone. A question for future research might be how much Unicode standardization transforms the Cherokee language in terms of meaning and understanding. What elements of Cherokee are altered and how are the new constructs interpreted by community members? How might Cherokee fonts and keyboards contribute to the sustainability of Indigenous culture and put language into practice?
Survival of indigital constructs requires reciprocity between systems. Indigital constructions are not set up as one-way flows of knowledge and information. Rather, indigital constructions are spaces for negotiation, featuring the ideas and thoughts of the participants. Reciprocity in this sense means cross-cultural exchange on equal footing, since any one party having too much power will consume any kind of rights-based approach to building bridges among all participants. One-way flows of knowledge are revealed when Cherokee or other Indigenous informants providing place-names to Apple, Microsoft, or Google realize that their own geographies are not represented. They are erased from the maps. Indigenous geographies are often trivialized as being local, vernacular, and particular to a culture, which goes against the grain of technoscience standardization and universalization. The trick of indigital reciprocity is shared power, networking (Latour 2005), assemblages (Deleuze and Guattari 1988), decentralization, trust, and collective responsibility. If all these relations are in place, rights-based approaches to community problems have a chance of success.
Indigital constructions are everywhere—Cherokee iPhone language applications or Kiowa stories in GIS are just a few examples, and many more occur in film, video, and other digital media types not discussed in this article. Yet, ironically, indigital constructions are also very distant from the reality of many Indigenous people on a global scale. Indigital constructions are primarily composed in the developed world, especially what is referred to as the global north. There is still a deep digital divide among Indigenous peoples and many Indigenous communities do not have access to digital technologies. How culturally appropriate are digital technologies like video, audio recordings, or digital maps? The indigital is distant in terms of addressing social problems within Indigenous communities. Oftentimes, there is a fear of the unknown in communities like the one described by Durbin Feeling in reference to adoption of social media applications like Facebook. Some Indigenous communities consider carefully the implications of adopting social media or language applications created for community interactions. Adoption may be slow, or not meet the expectations of software developers. Many questions arise in this process. Do creativity and social application go hand in hand? Sometimes we struggle to understand how our work can be applied to everyday problems. What is the potential of indigital constructions being used for rights-based initiatives?
Conclusion
English-speakers don’t often pause to consider how their language comes to be typed, displayed, and shared on digital devices. For Indigenous communities, the dominance of majoritarian languages on digital devices has contributed to the erosion of their languages. While the isolation of many Indigenous communities in the past helped to protect their languages, that same isolation has required incredible efforts for minority language speakers to assert their presence in the infrastructures of technological systems. The excitement over the turn to digital media in Indian country is an easy story to tell to a techno-positive public, but in fact this turn involves a series of paradoxes: we take materials out of Indigenous lands to make our devices, and then we use them to talk about it; we assert sovereignty within the codification of standardized practices; we engage new technologies to sustain Indigenous cultural practices even as technological systems demand cultural transformation. Such paradoxes get to the heart of deeper questions about culturally-embedded technologies, as the modes and means of our communication shift to the screen. To what extent do digital media re-make the Indigenous world, or can they function just as tools? Digital media are functionally inescapable and have come to constitute elements of our self-understanding; how might such media change the way Indigenous participants understand the world, even as they note their own absences from the screen? The insights from the technologization of Cherokee writing engage us with these questions along with closer insights into multiple forms of Indigenous information and communications technology and the emergence of indigital creations, inventing the next generation of language technology.
_____
Joseph Lewis Erb is a computer animator, film producer, educator, language technologist and artist enrolled in the Cherokee Nation. He earned his MFA from the University of Pennsylvania, where he created the first Cherokee animation in the Cherokee language, “The Beginning They Told.” He has used his artistic skills to teach Muscogee Creek and Cherokee students how to animate traditional stories. Most of this work is created in the Cherokee Language, and he has spent many years working on projects that will expand the use of Cherokee language in technology and the arts. Erb is an assistant professor at the University of Missouri, teaching digital storytelling and animation.
Joanna Hearne is associate professor in the English Department at the University of Missouri, where she teaches film studies and digital storytelling. She has published a number of articles on Indigenous film and digital media, animation, early cinema, westerns, and documentary, and she edited the 2017 special issue of Studies in American Indian Literatures on “Digital Indigenous Studies: Gender, Genre and New Media.” Her two books are Native Recognition: Indigenous Cinema and the Western (SUNY Press, 2012) and Smoke Signals: Native Cinema Rising (University of Nebraska Press, 2012).
Mark H. Palmer is associate professor in the Department of Geography at the University of Missouri who has published research on institutional GIS and the mapping of Indigenous territories. Palmer is a member of the Kiowa Tribe of Oklahoma.
[*] The authors would like to thank Durbin Feeling for sharing his expertise and insights with us, and the University of Missouri Peace Studies Program for funding interviews and transcriptions as part of the “Digital Indigenous Studies” project.
_____
Works Cited
Bhabha, Homi K. and J. Rutherford. 2006. “Third Space.” Multitudes 3. 95-107.
Brey, P. 2003. “Theorizing Modernity and Technology.” In Modernity and Technology, edited by T.J. Misa, P. Brey, and A. Feenberg, 33-71. Cambridge: MIT Press.
Byrd, Jodi A. 2015. “’Do They Not Have Rational Souls?’: Consolidation and Sovereignty in Digital New Worlds.” Settler Colonial Studies: 1-15.
Cushman, Ellen. 2011. The Cherokee Syllabary: Writing the People’s Perseverance. Norman: University of Oklahoma Press.
Deleuze, Gilles, and Félix Guattari. 1988. A Thousand Plateaus: Capitalism and Schizophrenia. New York: Bloomsbury Publishing.
Feeling, Durbin and Joseph Erb. 2016. Interview with Durbin Feeling, Tahlequah, Oklahoma. 30 July.
Feeling, Durbin. 1975. Cherokee-English Dictionary. Tahlequah: Cherokee Nation of Oklahoma.
Ginsburg, Faye. 1991. “Indigenous Media: Faustian Contract or Global Village?” Cultural Anthropology 6:1. 92-112.
Ginsburg, Faye. 2008. “Rethinking the Digital Age.” In Global Indigenous Media: Culture, Poetics, and Politics, edited by Pamela Wilson and Michelle Stewart. Durham: Duke University Press. 287-306.
Goriunova, Olga and Alexei Shulgin. 2008. “Glitch.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, MA: MIT Press. 110-18.
Hearne, Joanna. 2017. “Native to the Device: Thoughts on Digital Indigenous Studies.” Studies in American Indian Literatures 29:1. 3-26.
Hermes, Mary, et al. 2016. “New Domains for Indigenous Language Acquisition and Use in the USA and Canada.” In Indigenous Language Revitalization in the Americas, edited by Teresa L. McCarty and Serafin M. Coronel-Molina. London: Routledge. 269-291.
Hudson, Brian. 2016. “If Sequoyah Was a Cyberpunk.” 2nd Annual Symposium on the Future Imaginary, August 5th, University of British Columbia-Okanagan, Kelowna, B.C.
Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press.
Lewis, Jason. 2014. “A Better Dance and Better Prayers: Systems, Structures, and the Future Imaginary in Aboriginal New Media.” In Coded Territories: Tracing Indigenous Pathways in New Media Art, edited by Steven Loft and Kerry Swanson. Calgary: University of Calgary Press. 49-78.
Manovich, Lev. 2002. The Language of New Media. Cambridge, MA: MIT Press.
Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66:4. 919-941.
Palmer, Mark. 2016. “Kiowa Storytelling around a Map.” In Travis and von Lünen (2016). 63-73.
Palmer, Mark. 2013. “(In)digitizing Cáuigú Historical Geographies: Technoscience as a Postcolonial Discourse.” In History and GIS: Epistemologies, Considerations and Reflections, edited by A. von Lünen and C. Travis. Dordrecht: Springer. 39-58.
Palmer, Mark. 2012. “Theorizing Indigital Geographic Information Networks.” Cartographica: The International Journal for Geographic Information and Geovisualization 47:2. 80-91.
Palmer, Mark. 2009. “Engaging with Indigital Geographic Information Networks.” Futures: The Journal of Policy, Planning and Futures Studies 41. 33-40.
Palmer, Mark and Robert Rundstrom. 2013. “GIS, Internal Colonialism, and the U.S. Bureau of Indian Affairs.” Annals of the Association of American Geographers 103:5. 1142-1159.
Raheja, Michelle. 2011. Reservation Reelism: Redfacing, Visual Sovereignty, and Representations of Native Americans in Film. Lincoln: University of Nebraska Press.
Simpson, Audra. 2014. Mohawk Interruptus: Political Life Across the Borders of Settler States. Durham: Duke University Press.
Travis, C. and A. von Lünen. 2016. The Digital Arts and Humanities. Basel, Switzerland: Springer.
Vizenor, Gerald. 2000. Fugitive Poses: Native American Indian Scenes of Absence and Presence. Lincoln: University of Nebraska Press.
Wolfe, Patrick. 2006. “Settler Colonialism and the Elimination of the Native.” Journal of Genocide Research 8:4. 387-409.