    Mark Lipovetsky – A Culture of Zero Gravity (Review of Pomerantsev, Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia)

    Peter Pomerantsev, Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia (First edition, London: Faber & Faber, 2014. Revised edition, New York: PublicAffairs, 2015)

    reviewed by Mark Lipovetsky

    This essay has been peer-reviewed by the boundary 2 editorial collective. 

    Peter Pomerantsev’s book Nothing Is True and Everything Is Possible: Adventures in Modern Russia offers a chain of seemingly disparate but conceptually tied stories – about the Kremlin ideologue Vladislav Surkov, the former “king maker” and oligarch Boris Berezovsky, post-Soviet TV networks, Moscow night clubs, the suicides of top models, new religious sects, the victims of business wars between different branches of power, former gangsters-cum-TV producers, Western expats, the Night Wolves (an organization of bikers that has become an avant-garde of Putin’s supporters), and many other truly exciting subjects. Through these stories, written with a sharp, sometimes satirical pen, Pomerantsev presents modern Russia as a specific type of cultural organism rather than a projection of Putin’s or anybody else’s political manipulations and propaganda.

    Pomerantsev clearly rejects a stereotype shared by many contemporary political commentators but harkening back to Soviet times: the reduction of an entire society to the whims of its leaders (sometimes confronted only by a small group of brave and wise dissidents). Although Nothing Is True and Everything Is Possible portrays such “political technologists” as Surkov and depicts several figures of contemporary dissent, Pomerantsev clearly tries to deconstruct this cliché and deliver a much more complex vision. Notably, Putin is rarely mentioned in the book; he is designated simply as “the President,” which suggests that his personality is less important than his position within the system.

    Pomerantsev’s book methodically dismantles the myth about “the return of the Soviet” in recent years – a myth shared by many, within and outside Russia alike. While demonstrating the continuity between the late Soviet modus vivendi, the political compromises of the 1990s, and today’s radical changes, Pomerantsev consistently argues that we are dealing with a completely new kind of political discourse, within which recognizably Soviet elements play a very different role and disguise rather than reveal what is actually happening.

    The third widespread stereotype that is splendidly absent in Pomerantsev’s book is the discourse on “the Russians’ love of the strong hand,” Russia’s innate gravitation to authoritarian regimes and leaders, and, most notoriously, the alleged lack of a democratic tradition in Russia. Unlike numerous publications about contemporary Russia, Nothing Is True and Everything Is Possible never resorts to these Orientalizing and profoundly essentialist labels.[1] For Pomerantsev, Russia is not a backward and isolated player looking up at the perfect Western world; on the contrary, his book leads directly to the opposite conclusion: “Today’s Kremlin might perhaps be best viewed as an avant-garde of malevolent globalization. The methods it pursues will be taken up by others, and […] the West has no institutional or analytical tools to deal with it” (Pomerantsev and Weiss 2015, 7).

    This quotation is borrowed from a special report, “The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money,”  written by Pomerantsev together with Michael Weiss for Mikhail Khodorkovsky’s Institute of Modern Russia. The authors of the report ask: “How does one fight a system that embraces Tupac and Instagram but compares Obama to a monkey and deems the Internet a CIA invention? That censors online information but provides a happy platform to the founder of WikiLeaks, a self-styled purveyor of total ‘transparency’? That purports to disdain corporate greed and celebrates Occupy Wall Street while presiding over an economy as corrupt as Nigeria’s? That casts an Anschluss of a neighboring country using the grammar of both blood-and-soil nationalism and anti-fascism?” (Pomerantsev and Weiss 2015, 5).

    The report works with ideas which had been brewing in Nothing Is True and Everything Is Possible. Yet, this lively and observant book is less about politics per se, and more about culture as an effective form of politics. The reader of Pomerantsev’s book eventually cannot help but realize that Russia’s political twists and turns are born in night clubs and at parties rather than in Kremlin offices; that “the President,” despite his unconcealed hatred for Western-style democracy, is indeed truly democratic, since his thoughts and acts are synchronized with the desires of the majority of the Russian people (many of his supporters are well-educated, well-travelled representatives of the newly born middle class); and that in a society dependent on TV broadcasting – and the Russia depicted in the book is exactly such a society – the distance between cultural and political phenomena is minimal, if it exists at all. Although the first edition of the book appeared before Russia’s political turn of 2014, Pomerantsev only had to add a few pages to the 2015 version to reflect the new political reality after the annexation of Crimea. These pages do not stand out but look quite natural, since in the main body of Nothing Is True… Pomerantsev managed to pinpoint exactly those processes and tendencies that made the insanity possible.

    Freedom from stereotypes, coupled with Pomerantsev’s spectacular ability to present complex ideas through vivid snapshots, makes his book fertile ground for the discussion of much broader subjects. First and foremost, Nothing Is True and Everything Is Possible raises questions about the role of cynicism in Soviet and post-Soviet culture and politics, as well as about the relation between cynicism, authoritarianism, and postmodernism in both the Russian and global contexts. I will try to present a “dialogical” reading of Pomerantsev’s book, sometimes problematizing its concepts, sometimes expanding on them, sometimes applying them to material beyond the book’s content. It is a truly rare occasion when journalistic reportage provokes historical and theoretical questions, which proves that Nothing Is True and Everything Is Possible is a phenomenon out of the ordinary.

     “Reality Show Russia”

    Petr Pomerantsev was born in Kiev in 1977. In 1978 his father, the well-known poet and journalist Igor’ Pomerantsev, emigrated with his family from the USSR and began working as a broadcaster, first at the BBC Radio Russian Service and, from 1987 until the present, at Radio Liberty. Pomerantsev Jr. recollects in his Newsweek article how he enjoyed playing in the hallways of the BBC Bush House in London (see Pomerantsev 2011a). The BBC Russian Service was one of the most vibrant centers of anti-Soviet intellectual activity, so it is safe to assume (and the book confirms this impression) that the author of Nothing Is True and Everything Is Possible has absorbed the ethos of the late Soviet dissidents. This ethos might well have served as a repellent in the Russia of the 2000s, a country enraptured with nostalgic myths about Soviet imperial might and the stability of the Brezhnev era, along with a growing demonization of the Yeltsin period of democratic reforms, which strangely resonated with the rapidly increasing number of former and current officers of the FSB (the KGB’s successor) taking up prestigious political, economic, and media offices…

    In 2001, after graduating from Edinburgh University and gaining some job experience in British TV, Pomerantsev decides to try his hand in Russia – where he stays until 2010, working as a producer at the popular Russian entertainment TV channel TNT. Stays, because, as he explains, Moscow in these years “was full of vitality and madness and incredibly exciting”; it was “a place to be” (Castle 2015). Along with the increasing monopolization of political, economic, and media power in the hands of the FSB-centered clique, the 2000s were a period of noticeable economic growth, when Russia’s cities became cleaner and safer, when ordinary people started to travel abroad on a regular basis, when one could hardly find a Russian-made car in the thick stream of urban vehicles, when restaurants flourished, book sales were on the rise, and theatres were full every night… In short, when the economic reforms of the accursed Yeltsin years, in combination with skyrocketing oil and gas prices, started to bring long-awaited fruits (see Iasin 2005).

    While in Moscow, Pomerantsev produced reality shows and documentaries, and generally had to bring a “western” style to the “news-free” – i.e., supposedly apolitical – broadcasts of the TNT channel. Nothing Is True and Everything Is Possible is in many ways a memoir about these years on Russian TV. The reality show was one of the genres Pomerantsev produced, so the metaphor of Russian politics as a reality show holds a central place in his book; the first part of the book is entitled “Reality Show Russia.”

    One of Pomerantsev’s first discoveries associated with these – relatively free and diverse – years concerns the blurring of the borderline between fact and fiction, between a staged show and the news, especially on the Russian national channels united by the term “Ostankino” (the major TV studio in Moscow). As a TV news anchor from Ostankino explained to him, a young foreigner speaking fluent Russian and working on Russian TV: “Politics has got to feel like … like a movie!” (6)[2]. Pomerantsev explains how this motto works in practice: “… the new Kremlin won’t make the same mistake the old Soviet Union did: it will never let TV become dull… Twenty-first century Ostankino mixes show business and propaganda, ratings with authoritarianism […] Sitting in that smoky room, I had the sense that reality was somehow malleable, that I was with Prospero who could project any existence they wanted onto post-Soviet Russia” (7). However, his own career on a Russian entertainment channel serves as an illuminating example of the limits of “Prospero”’s power. Pomerantsev describes how he produced a reality show about people meeting and losing each other at the airport. Intentionally, he tried to avoid staged and scripted situations, seeking interesting characters and stories instead of sentimental effects. The result was quite predictable: “The ratings for Hello-Goodbye had sucked. Part of the problem was that the audience wouldn’t believe the stories in the show were real. After so many years of fake reality, it was hard to convince them this was genuine” (73). Furthermore, when Pomerantsev made several documentaries addressing societal conflicts and problems, they were all rejected by the channel on the premise that its viewers did not want to see anything negative.

    Yet, this is only half of the picture. In the second half of the book, Pomerantsev describes how he received a very tempting invitation to the federal First Channel. The head of programming, a best-selling author of self-help books (an important detail in the context of the book), offered him the chance “to helm a historical drama-documentary… With a real, big, mini-movie budget for actors and reconstructions and set designers… The sort of thing you make when you’re right at the top of the TV tree in the West…” (226). And the story was great: “about a Second World War admiral who defied Stalin’s orders and started the attack on the Germans, while the Kremlin was still in denial about Hitler’s intentions and hoped for peace. The admiral was later purged and largely forgotten. It’s a good story. It’s a really good story. It’s a dream project” (227). Most importantly, it was a true story that obviously defied the newly rediscovered admiration for Stalin’s politics in Russia’s public and media discourse (these days Putin even speaks highly of the Molotov-Ribbentrop pact). Yet, eventually Pomerantsev decided to decline this generous offer: “… I realise that though my film might be clean, it could easily be put next to some Second World War hymn praising Stalin and the President as his newest incarnation. Would my film be the ‘good’ programme that validates everything I don’t want to be a part of? The one that wins trust, for that trust to be manipulated in the next moment?” (231). In other words: “In a world that really has been turned on its head, truth is a moment of falsehood,” as Guy Debord writes in The Society of the Spectacle (1995, 14).

    This is a very important realization, not only as the turning point in Pomerantsev’s Russian odyssey, but also as an insight into the logic of the Russian “society of the spectacle,” itself resonant with Baudrillard’s almost forgotten concept of the “hyperreality of simulacra.” What seemed an almost grotesque philosophic hyperbole appears in Nothing Is True and Everything Is Possible as the practical experience of Pomerantsev and his colleagues. As follows from this experience, the capitalist society of the spectacle, contrary to Debord’s conceptualization, is not opposed to the communist social order but grows directly from it. Post-Soviet TV viewers remember and even nostalgically long for Soviet media, where ideological images constantly produced their own spectacle, perhaps not as attractive as the capitalist one, but still capable of fulfilling its main function: “By means of spectacle the ruling order discourses endlessly upon itself in an uninterrupted monologue of self-praise” (Debord 1995, 19). As for the “hyperreality of simulacra,” it appears in Pomerantsev’s book not only as a result of capitalist market forces (images that sell better dominate), but as a horizon in which the public demand for captivating (or entertaining, or horrifying) images and the political and economic interests of the ruling elite meet and happily fuse with each other. As follows from Nothing Is True…, the “hyperreality of simulacra” in its totality can be most successfully achieved not by capitalism alone, but by the blend of capitalism with post-Soviet authoritarianism, accomplished through the homogenization of the information flow.

    Back in the early 2000s, the prominent Russian sociologists Lev Gudkov and Boris Dubin defined Russian society as “the society of TV viewers.” This society had formed on the ruins of Soviet ideocracy, i.e., a society with a single official ideology which served either as an ally or as an opponent to multiple non-official ones. In this new cultural realm, political doctrines were replaced by entertainment which seemed apolitical yet (surprise, surprise!) was quite political indeed. For example, the 2000s saw numerous TV series about heroic, charming and, yes, suffering officers of the Cheka/NKVD/KGB: they were entertaining and even captivating, but eventually they established the figure of this organization’s representative as the epitome of the national destiny – one who defends the motherland, takes the hit from his (always his!) native organization, successfully overcomes the difficulties (temporary, of course) and triumphs over enemies (see Lipovetsky 2014). In the scholars’ opinion, the mass dependence of Russian society on TV images signified a process opposite to the formation of civil society: “Today’s social process of Russian ‘massovization’… is directed against differentiation and relies on the most conservative groups of the society” (Gudkov and Dubin 2001, 44). The scholars argued that while promoting negative identification – through the figures of enemies and demonized “others” – television offered uplifting “participatory rituals of power” that substituted for actual politics while feeding the longing for national grandeur, heroic history, and symbolic superiority.

    However, in the 1990s, the post-Soviet mediaspace was a battlefield of various competing discourses – liberal, neo-liberal, nationalist, nostalgic, statist, libertarian, etc. During the 2000s-2010s the full spectrum of these discourses gradually narrowed down toward cultural neo-traditionalism and political neo-conservatism (focalized on lost imperial glory, “Russia raising itself from its knees,” the collapse of the USSR as “the greatest geopolitical catastrophe of the century,” etc.). Pomerantsev observes the completion of this process in the TV-orchestrated nationalist mass hysteria accompanying the Crimean affair and the invasion of Ukraine in 2014: “… the Kremlin has finally mastered the art of fusing reality TV and authoritarianism to keep the great 140-million strong population entertained, distracted, constantly exposed to geopolitical nightmare that if repeated enough times can become infectious” (273).

    Without any competing media (no more than 5% of the Russian population gets its news from the internet), the homogenized narrative of post-Soviet TV not only shapes the opinions of the vast majority of the Russian population – the notorious 85% that (allegedly) wholeheartedly supports all of Putin’s initiatives – it also becomes an ultimate reality, symbolically superseding immediate everyday experience. In other words, television offers neither a simulation of reality nor a distortion of truth, but a parallel, and more real, world.

    Baudrillard wrote about “the desert of the real” (Natoli and Hutcheon 1993, 343), indicating that his hyperreality of simulacra was inseparable from the “metaphysical despair” evoked by “the idea that images concealed nothing at all” (345). On the contrary, Pomerantsev’s non-fictional characters, TV producers and “political technologists,” feel no despair whatsoever; rather, they enjoy their power over the “real” and celebrate the disappearance and malleability of any and all imaginable truth. In the formulation of Gleb Pavlovsky, a Soviet-time dissident who became a leading “political technologist” of “the Putin system” (although he was eventually expelled from the circle of Kremlin viziers): “The main difference between propaganda in the USSR and the new Russia […] is that in Soviet times the concept of truth was important. Even if they were lying they took care to prove what they were doing was ‘the truth.’ Now no one even tries proving the ‘truth.’ You can just say anything. Create realities” (Pomerantsev and Weiss 2015, 9).

    At the same time, as one can see from the offer Pomerantsev received from the Ostankino boss, this system recognizes truth and even effectively employs discourses that might be uncomfortable for the dominant ideology. Yet here these elements of credibility are instrumentalized as mere means for the performance of reality, a performance that neither its producers nor its consumers seem to judge by its truthfulness. Here, other criteria matter more. In the post-Soviet hyperreality of simulacra, truth is triumphantly defied; it is openly manipulated through a process of constant construction, negation, and reconstruction before the viewer’s eyes. This is why emphasis falls on the flamboyance and virtuosity of the (reality) performance, be it the Olympics or the public burning of tons of cheese imported from countries sanctioning Russia. This may be the Achilles heel of contemporary Russian politics. If performance supersedes reality, then invisible economic sanctions on the Russian leadership are much less painful than a boycott of, say, the 2018 FIFA World Cup.

    “Postmodern Dictatorship”?

    Curiously, the vision of malleable TV-dominated reality in Pomerantsev’s book deeply resonates with Generation ‘P’ (Homo Zapiens in the American version, Babylon in the British) by Viktor Pelevin, one of the most famous Russian postmodernist novels, published in 1999. The novel appeared before Putin was known to the broad public and was initially perceived as a summation of the Yeltsin period. Yet it proved to be a prescient account of the ideological shifts of Putin’s decade. Even on a surface level, the novel presents a shrewd political forecast for the 2000s. In Generation P, a graduate of the Literary Institute trained to translate poetry from languages he does not know, a character without features but with a “pile of cynicism,” Vavilen Tatarsky, becomes a copywriter, first for commercial advertisements, later for political ones, eventually rising from mediocrity to become the supreme ruler of the media, the living god secretly ruling post-Soviet Russia. This plotline retroactively reads as a parody of Vladimir Putin’s ascent to the role of “national leader.” With an uncanny acuity of foresight, Pelevin imagines the transformation of a non-entity into the “face of the nation,” in a range extending from the elimination of the “well-known businessman and political figure Boris Berezovsky” (2002, 249) – another character of Nothing Is True and Everything Is Possible – to a new cultural mainstream instigating nationalist nostalgia for the Soviet empire and both novel and familiar forms of class hatred. Pelevin even anticipated Russia’s newfound desire to lead the reactionaries of the world (Pomerantsev and Weiss write about this in their memorandum) – in his commercial for Coca-Cola, Tatarsky appears as the frontrunner for the “congress of radical fundamentalists from all of the world’s major confessions” (2002, 249).

    In Generation P, a gangster commissions Vavilen to produce a Russian national idea: “Write me a Russian idea about five pages long. And a short version one page long. And lay it out like real life, without any fancy gibberish […] So’s they won’t think all we’ve done in Russia is heist the money and put up a steel door. So’s they can feel the same kind of spirit like in ’45 at Stalingrad, you get me?” (Pelevin 2002, 138). This request, albeit expressed in slightly different terms, echoes a wide spectrum of cultural debates about the national idea in the Russia of the 1990s and 2000s, reflected in Pomerantsev’s book as well. However, when asked in 2008 whether Russia had found its national idea in Putin, Pelevin responded affirmatively: “That’s precisely what Putin is” (Rotkirch 2008, 82). Following this logic, one may argue that although Vavilen failed to accomplish the task assigned to him, his creator did not. Like Putin, Vavilen is a manifestation of Russia’s new national idea. He just isn’t sure what that truly is, since it is hyperreal and he himself created it.

    But let us pause for a second and ask whether the fusion of postmodernism and authoritarianism is possible at all. For Pomerantsev they are compatible. He respectfully cites the Russian oligarch Oleg Deripaska: “This isn’t a country in transition but some sort of postmodern dictatorship that uses the language and institutions of democratic capitalism for authoritarian elites” (50). In 2011, Pomerantsev published in the London Review of Books the article “Putin’s Rasputin,” which now reads as a seed from which the book was born (slightly altered, this text would be included in Nothing Is True…). The article describes Vladislav Surkov, a former deputy head of the President’s administration, Putin’s aide and vice-premier, the inventor of the concept of Russian “sovereign democracy” and builder of the United Russia Party, currently one of the chief coordinators of both the “hybrid war” in Ukraine and its orchestrated representation in the Russian media. In Surkov, who is also known as a novelist and songwriter, Pomerantsev sees (with good reason) the main designer of contemporary Russia’s political and societal system. Surkov, he contends, has fused authoritarianism with postmodernism, creating a completely new political system, which Pomerantsev tentatively defines as “postmodern authoritarianism”:

    Newly translated postmodernist texts give philosophical weight to the Surkovian power model. [Jean-] François Lyotard, the French theoretician of postmodernism, began to be translated in Russia only towards the end of the 1990s, at exactly the time Surkov joined the government. The author of Almost Zero [a postmodernist novel allegedly written by Surkov] loves to invoke such Lyotardian concepts as the breakdown of grand cultural narratives and the fragmentation of truth: ideas that still sound quite fresh in Russia. […] In an echo of socialism’s fate in the early 20th century, Russia has adopted a fashionable, supposedly liberational Western intellectual movement and transformed it into an instrument of oppression. (Pomerantsev 2011)

    This description continues in the book:

    Surkov likes to invoke the new postmodern texts just translated into Russian, the breakdown of grand narratives, the impossibility of truth, how everything is only ‘simulacrum’ and ‘simulacra’… and then in the next moment he says how he despises relativism and loves conservatism, before quoting Allen Ginsberg’s ‘Sunflower Sutra’ in English and by heart […] Surkov’s genius has been to tear those associations apart, to marry authoritarianism and modern art, to use the language of rights and representation to validate tyranny, to recut and paste democratic capitalism until it means the reverse of its original purpose. (87-88)

    Although this way of reasoning seems a little naïve (one man’s cultural convictions cannot be directly reproduced by the entire country, or even just Moscow), the question remains: how can one so easily marry postmodernism and authoritarianism? The similarities between what Pomerantsev depicts in his non-fiction and postmodernist theoretical models, as well as Russian postmodernist fiction, are too obvious to be ignored.

    It should be noted that Russian postmodernism has been radically different from the model described by Fredric Jameson as the “cultural logic of late capitalism.” Although participants in the late Soviet underground culture had a very fragmented knowledge of Western theory, if any, their works embodied the Lyotardian “incredulity towards grand narratives” in the scandalously transgressive and liberating forms of a counterculture which had been subverting both official Soviet hegemony and that of the intelligentsia (see in more detail Lipovetsky 1999, 2008). Although officially acknowledged in the 1990s, postmodernist writers and artists like Dmitrii Prigov, Vladimir Sorokin, Lev Rubinshtein, and their colleagues from the underground circles have, by and large, preserved their critical position towards neo-traditionalist and neo-conservative ideologies and cultural trends.

    Notably, in 2006 Vladimir Sorokin wrote a postmodernist dystopian novel, The Day of the Oprichnik (translated into English in 2011), which, as readers and critics almost unanimously admit, predicted, outlined, and exaggerated the actual features of the grotesque political climate of the 2010s. Lev Rubinshtein, the experimental poet famous for his “index card poetry,” has in the 2000s become one of the most brilliant and influential political essayists of the anti-Putin camp. Dmitrii Prigov, one of the founding fathers of Moscow Conceptualism, also published political columns critical of the new conformism and nostalgia for the lost grand narratives. Most importantly, he directly influenced the protest art of the new generation: before his untimely death in 2007 he collaborated with the group Voina (War), famous for its radical political performances. The founder of Voina, Oleg Vorotnikov, called Prigov the inspiration for the group’s creation and activities, and the former member of Voina and spokesperson for Pussy Riot, Nadezhda Tolokonnikova, repeatedly mentions Prigov as a deep influence who exposed her to contemporary, i.e., postmodernist, art and culture (see, for example, Volchek 2012).

    Although Pomerantsev does not write about these figures (he only briefly mentions Voina’s actions and “the great tricksters of the Monstration movement” [149]), it is with apparent tenderness that he describes the conceptualist artist Vladislav Mamyshev-Monro, whose impersonations of various cultural and political celebrities, including “the President,” were at first perceived as a part of the culture of simulation but turned out to be its subversion, incompatible with the new political freeze: “Vladik himself was dead. He was found floating in a pool in Bali. Death by heart attack. Right at the end an oligarch acquaintance had made him an offer to come over to the Kremlin side and star in a series of paintings in which he would dress up and appear in a photo shoot that portrayed the new protest leaders sodomizing. Vladik had refused” (278).

    These examples, although admittedly brief, nevertheless complicate and problematize the picture of “postmodern dictatorship” painted by Pomerantsev. At a minimum, they testify to the fact that postmodernism hosts dissimilar and even conflicting organisms, and that postmodernist culture since the 1980s has been evolving in various directions, some of which lead to Surkov while others lead to Pussy Riot. An informative parallel might be drawn with Boris Groys’s conceptualization of Stalinism and its cultural manifestation, Socialist Realism. In The Total Art of Stalinism (original title: Gesamtkunstwerk Stalin), Groys argued that Stalinism adopted avant-garde aesthetic methods and replaced the avant-gardist demiurge with the state (and Stalin as its personification): “… Socialist realism candidly formulates the principle and strategy of its mimesis: although it advocates a strictly ‘objective’, ‘adequate’ rendering of external reality, at the same time it stages or produces this reality. More precisely, it takes reality that has already been produced by Stalin and the party, thereby shifting the creative act onto reality itself, just as the avant-garde had demanded” (1992, 55). Groys’s argument has been criticized by historians of Socialist Realism, who point to the antagonism between Socialist Realism and the avant-garde and to the former’s reliance on much more populist and traditionalist discourses (see, for example, Dobrenko 1997). However, the very logic of the transformation of a liberatory aesthetics into sociocultural authoritarianism seems relevant to contemporary Russia. Despite Benjamin’s maxim, politics has been aestheticized since ancient times; but when the state acts as an artist, repression becomes inevitable.

    Although historical parallels can help to contour the phenomenon, by default they are never accurate. This is why I believe that in the cultural situation described by Pomerantsev we are dealing with something different: with the postmodernist redressing of a far more long-standing cultural and political phenomenon, one that tends to change its clothing with every new epoch – and Nothing Is True… excels in describing its current Russian outfit.

    From the History of Cynicism

    Throughout his entire book, using very dissimilar examples, Pomerantsev demonstrates the functioning of one and the same cultural (political/social/psychological) mechanism: the coexistence of mutually exclusive ideologies/beliefs/discourses in one and the same mind/space/institution. More accurately, it is not their coexistence, but the painless and almost artistic shifting from one side to the opposite; a process which never stops and is never reflected upon as a problem.

    Consider just a few examples from Nothing Is True:

    About Moscow’s new architecture: “A new office center on the other side of the river from the Kremlin starts with a Roman portico, then morphs into medieval ramparts with spikes and gold-glass reflective windows, all topped with turrets and Stalin-era spires. The effect is at first amusing, then disturbing. It’s like talking to the victim of a multiple personality disorder.” (124).

    About politics of “a new type of authoritarianism”: “The Kremlin’s idea is to own all forms of political discourse, to not let any independent movement develop outside its walls. Its Moscow can feel like an oligarchy in the morning and a democracy in the afternoon, a monarchy for dinner and a totalitarian state by bedtime” (79).

    About new “spiritual gurus”: “Just as Surkov had gathered together all political models to create a grand pastiche, or Moscow’s architecture tried to fit all styles of buildings onto one, Vissarion [a popular new “prophet”] had created a collage of all religions” (210).

    About media producers:

    The producers who work at the Ostankino channels might all be liberals in their private lives, holiday in Tuscany, and be completely European in their tastes. When I ask how they marry their professional and personal lives, they look at me as if I were a fool and answer: ‘Over the last twenty years we’ve lived through a communism we never believed in, democracy and defaults and mafia state and oligarchy, and we’ve realized they are illusions, that everything is PR.’ ‘Everything is PR’ has become the favorite phrase of the new Russia; my Moscow peers are filled with a sense that they are both cynical and enlightened. […] ‘Can’t you see your own governments are just as bad as ours?’ they ask me. I try to protest – but they just smile and pity me. To believe in something and stand by it in this world is derided, the ability to be a shape-shifter celebrated… conformism raised to the level of aesthetic act. (87)

    And once again about them:

    For when I talk to many of my old colleagues who are still working in the ranks of Russian media or in state corporations, they might laugh off all the Holy Russia stuff as so much PR (because everything is PR!), but their triumphant cynicism in turn means they can be made to feel there are conspiracies everywhere; because if nothing is true and all motives are corrupt and no one is to be trusted, doesn’t it mean that some dark hand must be behind everything? (273)

    About social psychology:

    Before I used to think the two worlds were in conflict, but the truth is a symbiosis. It’s almost as if you are encouraged to have one identity one moment and the opposite one the next. So you’re always split into little bits, and can never quite commit to changing things […] But there is great comfort in these splits too: you can leave all your guilt with your ‘public’ self. That wasn’t you stealing that budget/making that propaganda show/bending your knee to the President, just a role you were playing: you’re a good person really. It’s not much about denial. It’s not even about suppressing dark secrets. You can see everything you do, all your sins. You just reorganize your emotional life so as not to care. (234)

    Indeed, “conformism raised to the level of aesthetic act” is a great definition of cynicism. Furthermore, the post-Soviet complex illuminated by Pomerantsev deeply resonates with the brilliant description of the modern cynic in Peter Sloterdijk’s famous book Critique of Cynical Reason:

    … the present-day servant of the system can very well do with the right hand what the left hand never allowed. By day, colonizer, at night, colonized; by occupation, valorizer and administrator, during leisure time, valorized and administered; officially a cynical functionary, privately a sensitive soul; at office a giver of orders, ideologically a discussant; outwardly a follower of the reality principle, inwardly a subject oriented towards pleasure; functionally an agent of capital, intentionally a democrat; with respect to the system a functionary of reification, with respect to Lebenswelt (lifeworld), someone who achieves self-realization; objectively a strategist of destruction, subjectively a pacifist; basically someone who triggers catastrophes, in one’s own view, innocence personified […] This mixture is our moral status quo. (1987, 113)

    Obviously, there is nothing specifically post-Soviet in this description. According to Sloterdijk, “a universal diffuse cynicism” (1987, 3) is the widespread cultural response to the heavy burden of modernity. He defines cynicism as “enlightened false consciousness,” in opposition to Marx’s famous definition of ideology. Sloterdijk argues that cynicism offers the modern subject a strategy of pseudo-socialization, reconciling individual interest with social demands by splitting the personality into unstable, equally false and authentic social masks. The constant switching of these masks is the strategy of cynical accommodation to modernity. There is nothing specifically postmodern in this strategy either: Sloterdijk traces the genealogy of cynicism from ancient Greece to the twentieth century.

    However, he almost completely excludes the Soviet experience from his “cabinet of cynics.” Slavoj Žižek was probably the first to apply Sloterdijk’s concept to Stalinism. In The Plague of Fantasies he argued that the Stalinist henchmen far exceeded the cynicism of their Nazi colleagues: “The paranoiac Nazis really believed in the Jewish conspiracy, while the perverted Stalinists actively organized/invented ‘counterrevolutionary conspiracies’ as pre-emptive strikes. The greatest surprise for the Stalinist investigator was to discover that the subject accused of being a German or American spy really was a spy: in Stalinism proper, confessions counted only as far as they were false and extorted…” (1997, 58). And in Did Somebody Say Totalitarianism? he argued in even more detail that in the Soviet political system “a cynical attitude towards the official ideology was what the regime really wanted – the greatest catastrophe for the regime would have been for its own ideology to be taken seriously, and realized by its subjects” (2001, 92).

    On the other hand, as historians of Soviet civilization have demonstrated, the authorities’ cynicism generated matching cynical methods of adaptation among ordinary Soviet citizens, the “broad masses” and intelligentsia alike (although, of course, it would be wrong to generalize and imagine all Soviet people as seasoned cynics). Oleg Kharkhordin, in The Collective and the Individual in Russia: A Study of Practices, which deals with the Soviet purges and the origins of Soviet subjectivity, writes about the results of this process: “Their double-faced life is not a painful split forced upon their heretofore unitary self; on the contrary, this split is normal for them because they originate as individuals by the means of split. […] One of the steps in this long development was individual perfection of the mechanism for constant switching between the intimate and the official, a curious kind of unofficial self-training, a process that comes later than the initial stage of dissimulation conceived as ‘closing off’ (pritvorstvo) and one that we may more aptly call dissimulation as ‘changing faces’ (litsemerie) – and, we might add, as its summation – cynicism” (1999, 275, 278). In her book Tear off the Masks! Identity and Imposture in Twentieth-Century Russia, Sheila Fitzpatrick, the well-known social historian of Stalinism, makes no reference to Sloterdijk, but uses many documents from the 1920s and 1930s to demonstrate the constantly shifting logic of class discrimination and how it compelled the average person to manipulate their own identity, Sloterdijk-style, rewriting their autobiography and seeking a place in the official and unofficial systems of social relations.

    The heyday of Soviet cynicism came in the post-Stalin period of late socialism when, according to Alexey Yurchak, the author of the seminal study Everything Was Forever, Until It Was No More: The Last Soviet Generation, ideological beliefs frayed into pure rituals, participation in which demonstrated one’s loyalty to the regime and secured social success without embodying true belief. Pomerantsev directly establishes the link between late Soviet cynicism and today’s cultural reality:

    Whenever I ask my Russian bosses, the older TV producers and media types who run the system, what it was like growing up in the late Soviet Union, whether they believed in the communist ideology that surrounded them, they always laugh at me.

    ‘Don’t be silly,’ most answer.

    ‘So you were dissidents? You believed in finishing the USSR?’

    ‘No. It’s not like that. You just speak several languages at the same time. There’s like several you’s.’ (233-4)

    Having recognized the genealogical connection between late Soviet cynicism and the present-day triumph of cynicism among Russia’s elites, Pomerantsev offers the following diagnosis: “Seen from this perspective, the great drama of Russia is not the ‘transition’ between communism and capitalism, between one fervently held set of beliefs and another, but that during the final decades of the USSR no one believed in communism and yet carried on living as if they did, and now they can only create a society of simulations” (234).

    It sounds very logical, but a little too straightforward to be accurate. Besides, this logic fails to explain the internal shift that has resulted in the current state of affairs. What Pomerantsev disregards is the subversive power of cynicism, its insidiousness. In the late Soviet period, what may be defined as cynicism, or a cynical split into multiple I’s, was also responsible for numerous practices and discourses that Yurchak unites under the term “living vnye”:

    “… the meaning of this term, at least in many cases, is closer to a condition of being simultaneously inside and outside of some context” (2006, 128) – here, the system of Soviet ideas, expectations, scenarios, etc. The system of “vnye” discourses and milieus, in the scholar’s opinion, explains the sudden collapse of the invincible Soviet system: “although the system’s collapse had been unimaginable before it began, it appeared unsurprising when it happened” (1).

    Furthermore, from the 1920s until the 1980s, Soviet culture created a whole rogues’ gallery of attractive and winning cynics and non-conformists who brilliantly defeat the system. This cultural trope spectacularly manifested the “power of the powerless,” to use Vaclav Havel’s famous formulation. Literary and cinematic works about such characters enjoyed cult status while belonging to official and non-official culture alike: from personages like Ostap Bender of Ilf and Petrov’s diptych of satirical novels, The Twelve Chairs (1928) and The Golden Calf (1932) – both the subjects of multiple films – and the suite of demons accompanying the Devil Woland in Mikhail Bulgakov’s famous novel The Master and Margarita (written mainly in the 1930s, first published in 1966), to the authors and characters of Russian postmodernism, including Venedikt Erofeev’s Moscow to the End of the Line with its philosophizing trickster at the center, and Dmitrii Prigov, who constructed his cultural personality as a trickster’s ploy throughout his entire career.

    In my book Charms of the Cynical Reason (Lipovetsky 2011), I argue that the figure of the trickster in Soviet culture played a dual role. On the one hand, s/he provided cultural legitimacy to Soviet cynicism, even lending it the aura of artistry. The cynical split- or multi-personality may have been essential for surviving and enduring enforced participation in the grey economy and society. But, as a rule, this was accompanied by feelings of guilt and shame, compounded by the official Soviet rhetoric which demonized bourgeois conformism and interest in material comfort. The charming and versatile Soviet tricksters removed the feelings of guilt that Soviet readers and spectators might experience, turning the battle for survival into a jolly game that exposed the contradictions between official Soviet rhetoric and mundane survival.

    On the other hand, in full correspondence with Sloterdijk’s thesis that “Cynicism can only be stemmed by kynicism, not by morality” (194), the artistic and non-pragmatic trickster playfully mocked and demolished widespread cynical discourses and practices. By the term “kynicism” the philosopher designates the non-pragmatic, scandalous, and artistic aspects of cynicism – exactly those that the Soviet tricksters embodied. Thus, the lineage coming from the Soviet tricksters finds its direct continuation in Sorokin’s scandalizing use of scatological and cannibalistic motifs in his writings. Voina’s performances include such “cynical” acts as stuffing a frozen chicken up a vagina or simulating the lynching of immigrants in a supermarket. Pussy Riot, with their punk prayer in the main Moscow cathedral, appear to be true heirs to this tradition, adding new edges of political, religious, and gender critique to the trickster’s subversion.

    Thus, the “genius” of the Putin/Surkov system lies in its balancing of conformity and subversion, both associated with a refurbished and even glamorized late Soviet cynicism. Yet neither Surkov nor Pomerantsev realizes that a balance based on cynicism is unstable precisely because of the self-subversive nature of the latter. This balance was disturbed in 2011 by the excessive cynicism of Putin and Medvedev’s “switcheroo,” which generated a wave of protests lasting until May 6, 2012 (the day prior to Putin’s third inauguration), when a rally at Bolotnaya Square was brutally dispersed by the police. Notably, this protest movement was distinguished by a very peculiar – cynical, or tricksterish – brand of humor. For example, when the President compared the white ribbons of the protestors to condoms, the crowd responded by proudly carrying a huge, ten-meter-long condom at the next protest. Pussy Riot’s punk prayer asking the Mother of God to drive Putin away appeared as an inseparable part of this movement.[3]

    Troubled by these reactions, the regime responded in an increasingly aggressive and conservative way: from imprisoning Pussy Riot and the participants of peaceful demonstrations to homophobic laws; from the introduction of elements of censorship to the discrediting, persecution, and outright assassination of prominent liberal politicians; from the elevation of the Russian Orthodox Church as the ultimate authority in culture and morality to the promotion of a filtered “heroic history” as opposed to “negative representations” of the Russian and Soviet past; from rabid anti-Americanism to the general promotion of anti-liberal sentiment, etc. Spurred by the Ukrainian revolution, in 2014 this trend transformed into the active implementation of imperialist dreams, along with a nationalist and anti-Western media frenzy and a paranoid quest for enemies both foreign and domestic.

    All these tendencies clearly match Umberto Eco’s well-known criteria of Ur-Fascism: first and foremost, the cult of tradition (the infamous “spiritual bonds” about which Putin and his cronies like to speak so much); the rejection of modernism (in this case, postmodernism – as exemplified not only by the persecution of Pussy Riot, but also by pogroms of exhibitions of contemporary art, or the scandal around an experimental production of Wagner’s Tannhäuser in Novosibirsk); the macho cult of action for action’s sake (ranging from Putin’s naked torso to the Sochi Olympics as the main event in contemporary Russian history); “popular elitism,” along with contempt for parliamentary democracy and liberalism (“Gayrope”!); and, of course, nationalism enhanced by “the obsession with a plot, possibly an international one” (Eco 1995, 7).

    However, Eco did not mention that fascism can be born from an excess of cynicism turning on itself. This transition has been most illuminatingly described by Sloterdijk in his discussion of the birth of Nazism from the joyfully cynical atmosphere of the Weimar Republic. He argues that fascism positions itself as the enemy of ambivalence, histrionics, and deception, supposedly overcoming the cynical components of culture. It does so through the promotion of a radically primitive and reductionist conservative mythology, which is presented as a modern tool capable of releasing modernity from its controversial and demoralizing effects.

    However, as the philosopher demonstrates, fascism also represents the highest manifestation of cynical culture. First, according to Sloterdijk, Nazi mythology originates from the same philosophical premises as cynical culture: “In their approach, they are all chaotologists. They all assume the precedence of the unordered, the hypercomplex, the meaningless, and that which demands too much of us. Cynical semantics … can do nothing other than to charge order to the account of cultural caprice or the coercion toward a system” (399). Second, in totalitarian culture, theatricality becomes a crucial weapon of political warfare through the orchestrated representation of the leader and the aesthetics of mass political spectacles. No less important here is the performance of power’s transcendental status, which is guaranteed by messianic ideology, as well as by spectacles of national unity that cover up constant “tactical” ideological shifts, struggles within the upper echelons of power, the appropriation of “alien” ideological doctrines and practices, etc.

    Pomerantsev’s analysis deeply correlates with these observations. Russian media generously use the word “fascists,” typically applying it to the Ukrainian authorities and sometimes to Western countries; yet they use this word as an empty signifier, a universal label for everything “alien” and dangerous. Pomerantsev never uses the word, yet Nothing Is True… compellingly documents the rise of fascism in post-Soviet Russia (however postmodern it might be).

    Pomerantsev’s book is about fascism of a new kind, one which existing political radars fail to detect and thus overlook, and which is able to mimic Western discourse while thoroughly opposing it. This fascism is armed with the “hyperreality of simulacra” (instead of mere theater) and promotes its “traditional values” with an openly cynical smirk. It also effectively transforms the cynical negation of truth into a foundation for a new political paranoia, and masterfully adopts liberal rhetoric when needed. In Pomerantsev’s words: “This is a new type of Kremlin propaganda, less about arguing against the West with a counter-model as in the Cold War, more about slipping inside its language to play and taunt it from inside” (57).

    Only on the surface does this new fascism resemble Stalinism or late Soviet culture; in fact, it is a new phenomenon: unlike them, it is deeply embedded in capitalist economic, media, and cultural regimes. It is no longer based on a clear ideology but, to use Pomerantsev’s incisive formula, on “the culture of zero gravity”; it successfully utilizes capitalist mechanisms and liberal rhetoric, donning fashionable masks, including postmodern ones. Pomerantsev’s book warns the Western world that a monster has arisen within its own global cultural discourse. This monster rises in contemporary Russia, but it can rise elsewhere: this is why Pomerantsev and Weiss call Russia “an avant-garde of malevolent globalization.” At the very least, this means that the country and its current situation deserve very close and very well-informed attention, and that those resisting this new fascism within Russia – in culture, politics, or society – deserve the whole-hearted support and understanding of the rest of the world.

    Notes

    [1] Tellingly, in The New York Times review of Pomerantsev’s book, Miriam Elder noticed the absence of Putin but nevertheless reduced the stories of its diverse characters to the cliché: “they’re characters playing parts in the Kremlin’s script” (Elder 2014). It is little wonder that the reviewer chastises Pomerantsev for not writing about “Russia’s long and tortured history with authoritarianism,” i.e., Russia’s alleged authoritarian “habit.”

    [2] For all quotations from Pomerantsev’s book, I am using the 2015 edition.

    [3] Ilya Gerasimov, in his brilliant analysis of the popular rock singer Sergei Shnurov, who has been considered an epitome of post-Soviet cynicism, shows how Shnurov’s songs of 2012-14 transformed cynicism into a self-critical discourse (see Gerasimov 2014).

    Works Cited

    • Stephen Castle 2015. “A Russian TV Insider Describes a Modern Propaganda Machine.” The New York Times, February 13, http://www.nytimes.com/2015/02/14/world/europe/russian-tv-insider-says-putin-is-running-the-show-in-ukraine.html
    • Guy Debord 1995. The Society of the Spectacle, transl. by Donald Nicholson-Smith (New York: Zone Books).
    • Masha Gessen 2014. Words Will Break Cement: The Passion of Pussy Riot (New York: Riverhead Books).
    • Lev Gudkov and Boris Dubin 2001. “Obshchestvo telezritelei: massy i massovye kommunikatsii v Rossii kontsa 1990-kh,” Monitoring obshchestvennogo mneniia, 2 (52), March-April: 31-44.
    • Miriam Elder 2014. “Nothing Is True and Everything Is Possible, by Peter Pomerantsev.” The New York Times, November 25, http://www.nytimes.com/2014/11/30/books/review/nothing-is-true-and-everything-is-possible-by-peter-pomerantsev.html?_r=0
    • Umberto Eco 1995. “Ur-Fascism,” The New York Review of Books,  June 22, http://www.pegc.us/archive/Articles/eco_ur-fascism.pdf
    • Evgeny Dobrenko 1997. The Making of the State Reader: Social and Aesthetic Contexts of the Reception of Soviet Literature (Stanford: Stanford University Press).
    • Sheila Fitzpatrick 2005. Tear Off the Masks!: Identity and Imposture in Twentieth-Century Russia (Princeton: Princeton University Press).
    • Ilya Gerasimov 2014. “Lirika epokhi tsinicheskogo razuma,” http://net.abimperio.net/node/3353
    • Boris Groys 1992. The Total Art of Stalinism: Avant-Garde, Aesthetic Dictatorship, and Beyond, transl. by Charles Rougle (Princeton: Princeton University Press).
    • Evgenii Iasin 2005. Prizhivetsia li demokratiia v Rossii. Moscow: Novoe izdatel’stvo.
    • Nadia Kalachova 2014. “Piter Pomerantsev: ‘Zapad legko verit v to, chto Ukrainy ne sushchestvuet’.” LB.ua, April 1.
    • Oleg Kharkhordin 1999. The Collective and the Individual in Russia: A Study of Practices. (Berkeley, Los Angeles, London: University of California Press).
    • Mark Lipovetsky 1999.  Russian Postmodernist Fiction: Dialogue with Chaos (Armonk, NY: M.E. Sharpe).
    • Mark Lipovetsky 2008.  Paralogies:  Transformations of the (Post)Modernist Discourse in Russian Culture of the 1920s-2000s (Moscow: Novoe Literaturnoe Obozrenie).
    • Mark Lipovetsky 2011. Charms of the Cynical Reason: The Transformations of the Trickster Trope in Soviet and Post-Soviet Culture (Boston: Academic Studies Press).
    • Mark Lipovetsky 2014. “Breaking Cover: How the KGB became Russia’s favorite TV heroes?”  The Calvert Journal,  April 30, http://calvertjournal.com/comment/show/2433/the-rise-of-kgb-television-series
    • Joseph Natoli and Linda Hutcheon, eds. 1993. A Postmodern Reader (Albany: State University of New York Press).
    • Viktor Pelevin 2002. Homo Zapiens, transl. by Andrew Bromfield (New York and London: Viking).
    • Petr Pomerantsev 2011. “Putin’s Rasputin.” London Review of Books 33, no. 20 (20 October), http://www.lrb.co.uk/v33/n20/peter-pomerantsev/putins-rasputin
    • Petr Pomerantsev 2011a. “The BBC’s Foreign Language Cuts Are Britain’s Loss.” Newsweek, April 3. http://www.newsweek.com/bbcs-foreign-language-cuts-are-britains-loss-66439
    • Petr Pomerantsev and Michael Weiss 2015. “The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money.” A Special Report presented by The Interpreter, a project of the Institute of Modern Russia. http://www.interpretermag.com/wp-content/uploads/2015/07/PW-31.pdf
    • Kristina Rotkirch 2008. Contemporary Russian Fiction: A Short List. Russian Authors Interviewed by Kristina Rotkirch, ed. by Anna Ljunggren, transl. by Charles Rougle (Evanston: Northwestern University Press).
    • Peter Sloterdijk 1987. Critique of Cynical Reason, transl. by Michael Eldred (Minneapolis: University of Minnesota Press).
    • Dmitrii Volchek 2012. “Iskusstvo, pobezhdaiushchee strakh,” Radio Svoboda, May 16, www.svoboda.mobi/a/__/24583384.html
    • Alexei Yurchak 2006. Everything Was Forever, Until It Was No More: The Last Soviet Generation (Princeton: Princeton University Press).
    • Slavoj Žižek 1997. The Plague of Fantasies (London, New York: Verso).
    • Slavoj Žižek 2001. Did Somebody Say Totalitarianism? Five Interventions in the (Mis)Use of a Notion (London, New York: Verso).

    Mark Lipovetsky is Professor of Russian Studies at the University of Colorado-Boulder. He is the author of more than a hundred articles and eight books. Among his monographs are Paralogies: Transformations of the (Post)Modernist Discourse in Russian Culture of the 1920s-2000s (2008) and Charms of the Cynical Reason: The Transformations of the Trickster Trope in Soviet and Post-Soviet Culture (2011).

  • Eric Wertheimer – The Telegraphed Republic: Semaphores and the New Executive

    Eric Wertheimer – The Telegraphed Republic: Semaphores and the New Executive

    by Eric Wertheimer

    This essay has been peer-reviewed by the boundary 2 editorial collective. 

    Paris is quiet and the good citizens are content.

    —Napoleon Bonaparte’s first message delivered by optical telegraph after seizing power in 1799

    Maybe it should not surprise us that among the first messages of consequence sent by telegraph is an exacting statement about the silent satisfaction of the “people.”[1] Contrasted with Samuel Morse’s portentously open-ended “What hath God wrought?”, the cool administrative authority of Napoleon Bonaparte’s dispatch is rendered all the more acute. A republican concern for “contentment” —the Enlightenment affection for happiness even—is condensed into a secure, omniscient, and unambiguous transmission meant for deferred public consumption. It is a mode we might call “executive”—political information traveling the secure means of public power, assuming public consent as confirmation of its administrative reach; it is fitting that Napoleon inherited this state technology from the republic’s ruling Directoire Exécutif (Executive Directory). The victory message encodes and relays, rather than prints and broadcasts, the certainty of order and noiselessness, presuming authority over a defined geography.

    That certainty, in turn, can be understood as a product of the telegraphic process itself, a new, though not completely un-theorized, thing.[2] Prior to the telegraph, during the period of the republican nationalizing of print, a wave of information networks sought to align the aggregate of public discourse with the apparatus of state power. Napoleon’s telegraphy, however, represents a point of inflection away from this aggregation toward state power, inviting us to think about why the telegraph caused a change in republican governance. Consider that republican theories of oration and writing imagined an expansive and egalitarian field of persuasion and critique that, like a self-leveling body of water, would spread just behind, or ahead of, the nation itself. Coextensive with the information springs that diffuse its imagined selves, the new republic guaranteed a check on the balance of power between the public and civil authority. But, framed by republican ideology and the necessities of war, it became apparent during the transformations of Napoleonic statecraft that information would be both critical to public welfare and increasingly restricted within its compass. Networks of information would no longer be isomorphic with public self-conceptions—if they ever were. Republicanism’s sphere of mediation, an idealized reflecting surface, also started to become a linear network of self-locking channels—dark chambers and webs as opposed to mirrors and funnels. To be clear, telegraphy did not supplant these forms of mediation and political circuitry; rather, it added to the range of techniques available to the new republican state.

    When we begin to consider the semaphoric landscape of national communications, we see the various ways the republic of letters was not so much a network of transparent arguments and narratives about national integration or even regional identity, nor a stage upon which the powerful exercised their arguments anonymously in service to the public good. During the early Federal period of the 1790s, it was increasingly a republic of inverted publicity, with the relays, compressions, and decompressions of texts serving to disrupt the economic and informational systems of print, signal-making, and manuscript. And this is true not only of everyday signal-making within networked New England (cf. Matt Cohen), but also of the highest and most canonical of American Enlightenment thinkers and politicians. Understanding the technology as a transatlantic republican transfer, we can begin to understand the republic of letters in its full range of mediated symbolic economies.

    *

    The chambering effect of the earliest semaphore telegraphs was architecturally visible in the structure of the telegraph stations themselves, and in the patterns such structures made upon the geography of the modern national republic. Figure 1 shows a Binary Panel telegraph from 1794, which diagrammatically charges each space with discrete intelligence, at once connected to and sealed off from an integrated whole.

    Figure 1: Binary panel telegraph. From Louis Figuier, Les Merveilles de la science (1867–1891), Tome 2.

    Designed as a set of relational binaries, the networked physical space combines autonomous coding with a necessary extension of that space into repeated linkages. Each linkage requires privileged access to interpretation and to the overall design of the coding structure. Individually and taken together, these stations signaled an open-air puzzle that invaded public space with private meanings about public things. Under such conditions—these coded assemblages—readers and writers form a publicly private sphere, or what we might call a crypto-public.

    That geometry of segmentation and networking repeated itself at larger scales. Indeed, such segmenting is evident in the regularized rhizome or web that becomes the French station map from 1792 until 1852 (see Figure 2). At the risk of over-metaphorizing the pattern, this webbed rhizome forms a networked humanoid (the King’s ghosted body?) in its telegraph system, revealing in the process a kind of early infographic of modern power.

    Figure 2: Chappe telegraph system

    Paul Virilio has described the post-Roman segmentarity of territory as essential to the logic of modern states, imposing linearity in its militarized necessity. Perhaps that is discernible here with the return of an updated Roman militarism, very much in keeping with an echoed Napoleonic neoclassical and royal poetics, but here with the modern executive as its austere figurative counterpart. The map reveals the telos of this transformation: a capacity to make meaning visibly transportable across the French topography and adherent to the needs of the new republican executive.[3]

    We can begin to evaluate this transformation, particularly visible in the 1790s, by asking questions not just of the broader shapes of networking and mediation, but of our dispositions as readers of both ambiguity and precision. The new telegraphic sphere transformed the methods of reading itself, taking the rage for cryptography and ciphers into a more systematic and pervasive (because more public and open) mode of coded transmission. Words, clauses, phrases, and sentences (which can helpfully be classified as “intelligence,” “information,” “data,” or “plaintext”) were condensed into graphemes, and then sequenced for reception that resists confusion; we might even think of it as among the earliest forms of pure information. And yet we should remember that the pure informatics of the telegraph does not exclude confusion from the structure of transmission—indeed, confusion (as with all cryptography) is a feature, not a bug. Confusion fills the role of the sealed envelope in open field communication like telegraphy, keeping the visible illegible, but packaged nonetheless. The conversion of plaintext into a smoother temporal package, able to splice distance, divides legibility into the indecipherable and the solvable. Depending on one’s access to the network of interpretive competence, one’s intention and position in the signifying relay, what appears opaque can be perfectly legible.

    Telegraphy’s early binary hermeneutics had its sources in a worry about the degrading of meaning across distance and about the sufficiency of human agency itself. As a result of perceptual and interpretive decay, uncertainty emerges as an unfortunate consequence of the limitations of human acuity, a crippling nearsightedness. Telegraphy thus demanded a larger perceptual apparatus that could be prosthetically attached to the human eye, and it is given form both in the structures of the transponder stations that were built during the period of Napoleonic republicanism, and in the political state that necessitates and organizes those stations. The contingencies of visibility made signification’s emblems layered and rich, and they can be read as meaningful artifacts in both their patterns of projective mechanics and their metaphors for explaining political effects.

    The quest for a secure broadcast meant rethinking the terms of internal signification, transmission, and external reception—a reclassifying of the perceptual mechanics of communication. Some of that work had already been theorized as a merging of the “sudden” with the “certain.” In 1684, Robert Hooke hailed the “certainty” of a proposed telegraphic “intelligence” in a “Discourse to the Royal Society. . . shewing a Way how to communicate one’s Mind at great Distances”: “That. . . [is] a Method of discoursing at a Distance, not by Sound, but by Sight. . . . ’tis possible to convey Intelligence from any one high and eminent Place, to any other that lies in Sight of it. . . in as short a Time almost, as a Man can write what he would have sent, and as suddenly to receive an Answer” (Hooke 1967, 142). Suddenness is Hooke’s word for automatism, or perhaps “telepresence.” Data imprints itself onto a computational, agent-less consciousness, forgoing faculties of judgment and interest that delay immediate understanding and make demands on memory. Distance and recall become what I would call negative fields of exchange, and only important to the degree that they are made to recede.[4] It is the origin of text as pure data rather than argumentation or even “thought”—non-conversational, foreclosing dialogue and memory, a text that is constrained and defined by its linear velocity, particularly well-suited to the purposes of the executive state. That executive state stood to recover what its nationalized representational technology might dissolve in the circuitry.

    That executive reclamation, at the dawn of the constitutional era of republics, is driven by the semiotics of technological migrations and deployments. I assemble this story out of disparate fixtures that may seem wholly unrelated. That is, it may seem this essay is bifurcated nationally and conceptually—France and America, telegraphic structures and telegraphic metaphors—but I want to contend that each is essential to understanding how the other functions. While this essay is largely about French telegraphes and then American “telegraphs,” it is really about how the new world of executive informatics in the transatlantic republican era came to recognize itself as a political technology and, in so doing, made room for a new kind of communicative regime.

    *

    Edward Cave’s London-based Gentleman’s Magazine in 1794 ran a long item about the deployment of the semaphore telegraph in France, a piece of reportage which was reprinted in US periodicals not long after. What is particularly telling about Cave’s piece is its appending to the report a quotation from Ovid, “Fas est ab hoste doceri [It’s right to learn from one’s enemy].” Cave signals to his readers that the import of the technological breakthrough is as much in its usefulness in defining enemies as in its power as a means of communication.

    Mr. Urban                               Sept 11.

    The telegraph was originally the invention of William Amontons. . . [who] contracted such a deafness as obliged him to renounce all communications with mankind. . . . This philosopher also first pointed out a method to acquaint people at a great distance, and in a very little time, with whatever one pleased. This method was as follows: let persons be placed in several stations, at such distances from each other, that, by the help of a telescope, a man in one station may see a signal made by the next before him; he immediately repeats this signal, which is again repeated through all the intermediate stations. This, with considerable improvements, has been adopted by the French, and denominated a Telegraphe; and, from the utility of the invention, we doubt not but it will be soon introduced in this country. Fas est ab hoste doceri. (Cave 1794)

    And so indeed the French enemy speaks of telegraphy as the key to republican governmentality: in this iteration, the invention is quickly attached to the purposes of a voracious political program that is, in turn, alleged to be in the service of shadowy power-hungry persons.

    To Cave’s account is appended a much-reprinted copy of the notorious Jacobin Bertrand Barere’s report to the French Convention of August 15, 1794, which describes the telegraph’s value to the new Republic:

    By this invention the remoteness of distances almost disappear; and all the communications of correspondence are effected with the rapidity of the twinkling of an eye. The operations of Government can be…facilitated by this contrivance, and the unity of the Republick can be the more consolidated by the speedy communication with all its parts. The greatest advantage which can be derived from this correspondence is. . . its object shall only be known to certain individuals. (Cave 1794, 815–16)

    Barere’s advertisement in the English-speaking press for the new technology is decidedly political but blind to its ideological contradictions—“certain” individuals who make use of certainty, executives, seeking the unity of the republic’s dispersed publics. The telegraph is immediately conceived in such quarters in executive mode, exclusive to what Friedrich Kittler (2010) in Optical Media calls a “militant elite” (74), despite the radical democratic alignment of the partisans who are accused of deploying it.[5]

    The capacity to make information a spectacular but exclusive part of modern control, and a certain kind of public comfort, was immediately evident to all who had a stake in political power. Consider the telegraph in Figure 3, which Claude and Ignace Chappe perfected as a synchronized pendulum system between 1790 and 1791.[6] Despite its anachronism as a depiction (c. 1868), it is a useful rendering of the original system as first used on March 2, 1791, with synchronized towers in Brûlon and Parcé. In the distance (16 kilometers) is the synced analog clock in Brûlon, distinguished by the popping face of phosphorous white set on the edge of a calm green horizon.

    Figure 3: Visual telegraph system, 1791. Sheila Terry/Science Photo Library

    A modified clock face in a guillotine-like frame is the inaugural emblem of telegraphic coding—here, divided temporality looms over all aspects of its earliest methods. More precisely, the clock face is a dead metaphor, its hands no longer rotating inexorably through a fixed template. It has become, instead, an analogic key for the alphabet (the severed head still speaking), divided into sixteen points that indicate letters through serial combinations of position and sound. The reliance upon numerals underscores the medium’s crypto-publicity, even as it bespeaks time as something to be erased or repositioned. Temporality has been transferred from the “clock” to the spaces between, with the linearity of time made literal in its acoustical and graphical run over space, the natural world modernized by technical structures; indeed, now clocks could be made to “speak” to one another, from station to station, over the mute countryside. The speech of such machines across backcountries was—even as early as 1795 in the following press report on the surrender of Condé-sur-l’Escaut (see Figure 4)—referred to as “lines,” well before the more obvious metonym deriving from the cables and wiring of electronic telegraphy.

    Figure 4: Press report on the surrender of Condé-sur-l’Escaut. Source: Readex (Readex.com), a division of NewsBank, inc.

    The stations were not meant for public squares, as in the somewhat fanciful context in the rendering above (see Figure 3), but for settings more like that of the austere, passive-voiced reportorial description here (see Figure 4). Indeed, the public should be imagined as a potential decoder in the countryside, waiting passively or aggressively beneath the flow of information, a confusion of unseen seers unable to see—“copying the words so expeditiously, and for throwing such a body of light as to make them visible. . . does not yet appear.” There is not only a deliberate confusion engendered by the encrypted signals, but also a confusion that arises from the non-human agency of transponders—bodies of light and bodies of human copiers are effectively the same.

    In fact, that initial puzzlement or wonderment about the grafting of data over demographics was soon turned to distrust of the elitist, undemocratic origins and ends of executive transponders. This disorientation—the national imaginary threatened by encrypted units of time and space, or what Patricia Crain (2003) describes as “paranoia” about the semaphore telegraph’s “seemingly self-authorized power”—is not discernible in any of the early transatlantic accounts (67). But we know that the successor to Chappe’s panel telegraph was destroyed twice by mobs who apparently feared that the telegraph was being used to aid royalist forces within France.

    Barely two years later, after the synchronized pendulum and then binary panels, the French had established the first telegraph line, an optical semaphore that used a four-part armature, the angles of which could be regulated (see Figure 5).

    Figure 5: French semaphore telegraph.


    On view here is a muted sociability, with blackened windows in the tower, the crowd absent from a landscape that suggests its utilitarian purpose with diagrammatic rigor. A single man stands with a horse, back to us, presumably part of the postal system, ready to dispatch an end note. The clock face has been replaced by a purely angular semiotics, one whose capacity for coded permutations was multiplied greatly, with sixty-three possibilities. In the armature of the new semaphore, time of transmission is squeezed into smaller intervals, smoothed out and compressed, abbreviation disguised as executive technique. Another way to put it is that executive communication via telegraph was “steganographic,” a term borrowed from diplomatic ciphering’s system for managing confusion. Steganography provided for the polity to be policed and protected via codes.

    *

    Steganography, the communicative science of secrecy, is one way telegraphy relayed across the Atlantic, before the infamous nineteenth-century story of the heroic laying of the cable. For instance, Benjamin Franklin Bache’s 1797 English-French reprinting of P. R. Wouves’s fragmentary Tableau Syllabique et Steganographique/A Syllabical and Steganographical Table figures encryption as a form of telegraphy, stressing the way steganography guarantees the “safety of the secret” (Wouves 1787, repr. 1797).

    Figure 6: Title page of French steganography text, published in Philadelphia.

    Part of a mini-genre of ciphering and cryptography tables in the 1790s, the manual assembled methods for encrypting information numerically so that it could travel publicly without detection as to alphabetic meaning. As Wouves’s (1787, repr. 1797) manual puts it, “Experience will soon bring to a general demonstration, that the method here employed, is not only conducive to secure the secret of any kind of written intercourse, but is even advantageously applicable to all telegraphical purposes, as well as to all sorts of reconnoitering signals, either for the sea or land service.” The word “telegraphical” here, in its adjectival charge, tropes automatism and brevity, as the republic begins to experiment with abbreviation and secrecy as civic activities.

    In 1793, about four months after George Washington delivered the shortest inaugural address in American history (a speech entirely given over to a brief of republican executive caution: depose me if I abuse power), several newspapers reported the demonstration of Chappe’s telegraph as part of a nationalized system. The reportage picked up considerably with news of its instrumentality in the French capture of Condé-sur-l’Escaut from the Austrians in 1794. But rather than the lightning of Napoleonic informatics, telegraphy actually crossed the Atlantic in the slow boat of metaphor, as a second-order signal about easily-relayed public information, rather than a viable state technology of applied steganography. As with other new technologies finding their places within uneven national development, telegraphy and steganography became devices for conceiving a new cultural and political capability, rather than stories about what the devices themselves did. In the United States, telegraphy became a commercial curiosity inserted into print culture in a way that was either part of an enthusiastic endorsement of, or deep suspicion about, republicanism and its aggressive mutations in Jacobin and early-Napoleonic France. What we notice most is the specter of a technology being used quite openly to ideological ends—republicans praising the possibility of the telegraphic state and Federalists decrying the militarized appropriation of its capacity for public information. Both postures were paranoid, but not entirely unwarranted.

    Telegraphy entered the burgeoning print press of the early United States as a metaphor about how dissemination of news across the republic might take place—“telegraphing,” as Barere made clear, keeps information current even at great distances, to the benefit of executive and public alike. In the process of metaphorical transposition, the military was quickly conflated with the political. It joined other technophilic tropes of distance-, time-, or mechanical-conquest in newspapers that incorporated words like “telescope,” “time piece,” “balance,” and “orrery” into their nameplates. This needful over-reach in the naming of newspapers even became the object of satire. On the millennial date of January 1, 1800, the Massachusetts Mercury made sport of the practice of media metaphors in a poem, targeting competitors like The Aurora, The Argus, and The Bee:

    That ‘RORA’s’ cloudy, ‘ARGUS’ squints;

    That the pert ‘BEE’’s a worthless drone,

    Sans sting or honey of his own.

    But dropping anger, loud you laugh,

    At signal false of ‘TELEGRAPH.’   

    Despite the manifest epidemic of journalistic over-selling, the telegraphic brand in the US preserved a naïve belief in the transparency of print messaging—it was the same, only faster, and so its use in key parts of the new republic was reassuringly not ambiguous or threatening. As long as it was subsumed within print networks, the telegraph was only indirectly threatening to those who feared republicanism and its radical political agenda derived from revolutionary militancy.

    An assortment of rural towns and cities outside the northern periphery had papers that adopted the title. Charleston, SC, was North America’s first: its Telegraph published only four issues, from March 16, 1795 to March 20, 1795, under the variant title The Telegraph and Charleston Daily Advertiser. A proposed Baltimore Telegraphe was advertised in the Aurora General Advertiser in February 1795, but never published. Perhaps the most infamous and influential early “telegraph” (and in many ways an anomaly to the regional and demographic rule at work here) was Boston’s arch-Republican twice-weekly, the Constitutional Telegraph.[7] Aside from Boston’s Constitutional Telegraph, what one notices is that the majority of “telegraphs” were removed from the metropolitan centers, either in suburbs or frontier towns, across regions north to south, mid-Atlantic to west. Examples include: the Telegraph in Georgetown, KY[8]; the American Telegraphe in Newfield, CT, with a variant of American Telegraphe & Fairfield County Gazette[9]; and Maine’s Wiscasset Telegraph.[10]

    The use of the technologized press metaphor tended to be a signal about the party affiliations of the printer, as was the case of Brookfield, Massachusetts’s The Moral and Political Telegraphe.[11] It was produced as a successor to the venerable Isaiah Thomas’s Worcester Intelligencer by Thomas’s apprentice Elisha H. Waldo. Upon taking it over for its run as the Telegraphe, Waldo shifted the paper’s allegiance to republican France, which, as a front-page report of an anti-Jacobin conspiracy put it, had “warm American friends.” There is something fitting about the great printer’s pupil paradigmatically adopting the telegraphic metaphor to politically realign and modernize the news. Allegiance, whether changed or emphasized, moral or political, was publicized by subtle orthographic signals: Boston’s The Constitutional Telegraph changed its spelling to the French in 1802, becoming The Constitutional Telegraphe (see Figures 7 and 8). After the War of 1812, and, arguably, the onset of a Napoleonic executive posture in US political culture, the telegraph secured its place firmly in a variety of nameplates. For instance, the Hillsboro Telegraph[12]; the Columbian Telegraph[13]; the Rochester Telegraph[14]; the Telegraph[15]; and the American Telegraph published in Brownsville, PA.[16]

    Figure 7: The English Constitutional Telegraph. Source: Readex (Readex.com), a division of NewsBank, inc.

    And the Constitutional Telegraphe, three years later, transformed with the French spelling and the flourish of an iconic masthead (see Figure 8):

    Figure 8: The French Constitutional Telegraphe, three years later. Source: Readex (Readex.com), a division of NewsBank, inc.

    The press motif of a telegraphed republic had a life beyond bringing both news and a politicized Napoleonic semiotics to the suburbs and rural outposts of the United States. Indeed, its metaphorical purposes did cultural, and even literary, heavy lifting beyond the journalistic. It was both an emblem of progressive inventiveness in pro-French propaganda and a warning about dystopic ruination in Federalist satire. Of the pro-French variety, telegraphy became a glorious artifact and a feature of historical prophecy: Virginia republican St. George Tucker’s 1794 replica model/homage to Chappe’s telegraph was soon after included in Peale’s Museum. And the Aurora/Gazette of 1795 hailed the telegraph bringing news at the speed of “lightning” of the “disgrace of our enemies” and the “towering flights of our glories,” positing the conflation of the technology with the triumphantly political. English historian John Gifford produced a grand hagiography, The History of France (1792), praising the telegraph and Napoleon’s military genius; it was republished by the Aurora’s republican printer and editor, William Duane.

    Of the latter, anti-Jacobin genre, John Lowell Jr.’s The Antigallican [sic]; or, The Lover of His Own Country: In a Series of Pieces Partly Heretofore Published and Partly New, Wherein French Influence, and False Patriotism, Are Fully and Fairly Displayed (1797) retails the purported schemes in which Jacobins were wont to use technology to bypass rational debate and aim straight for the passions.

    By pompous professions of their own purity, and by an over zealous crimination of their opposers, the Jacobins always aim at exciting the passions of the people, before their understandings have opportunity to examine into the truth. They know that public clamor is like a torrent, which in its destructive course, sweeps away every vestige of human wisdom. . . . By exciting it therefore they hope to overwhelm the monument of law, order and public authority. Thus in the case of the treaty with Great Britain, no arts, no intrigues, no falsehoods were omitted, to excite the prejudice and inflame the passions of the people. . . . They distributed it with the rapidity of the telegraph, and promoted instant discussions of it in illegal assemblies…proud ambition leagued with stupid folly. (32)

    Lowell’s “telegraph” is a symbol, part of a political jeremiad meant to appropriately dispose the public to a threat. It is accurate insofar as it presages President James Madison’s rather overt preying on public passions fifteen years later, during the War of 1812. It is also a moment that arguably ushered in the new age of war-making as executive action, when the telegraph’s secretive modalities do indeed feed a politics that is really neither republican nor Federalist, but executive. And while Lowell might have been justified in worrying over the Democratic–Republican tilt toward France, he need not have fretted over the communicative means to that end, because the telegraph’s military mode was similarly disdainful of the public and its passions; passions could be political, but executive telegraphy was merely strategic.

    *

    The realization of the telegraph as a militarized national project took some time, but of course it did happen just early enough for the United States to weigh in formally against British anti-Bonapartism. But the actual building of telegraph networks stayed relatively modest in the first decade of the nineteenth century. It is first mentioned as a possible means of “dart[ing] information” in hypothetical land and naval maneuvers in John Dickinson’s 1798 pro-French tract, A Caution: or, Reflections on the Present Contest between France and Great-Britain: “If intelligence from places more remote is wanted, telegraphs can dart it, with any requisite velocity.” Not long after, Jonathan Grant of Belchertown, MA, obtained a patent for an improved semaphore telegraph in 1800 and ran a line between Martha’s Vineyard and Boston (see O’Rielly 1869, 262–69).

    In 1813, in the midst of North America’s extension of Napoleonic war, civil engineer Christopher Colles reverse-engineered the telegraphic metaphor and published Description of the Numerical Telegraph for Communicating Unexpected Intelligence by Figures, Letters, Words, and Sentences (see also O’Rielly 1869, 262–69). The word “unexpected” here replays the sense of suddenness and even automatism suggested in Hooke’s plans for a medium that would allow two minds to surpass distance. The condition of war becomes a crystal in the fluid of the saturated medium of political necessity. It was the first attempt to really theorize and diagram a new kind of semaphore telegraph, one that would be put to the uses of the American state (see Figure 9). Colles’s plan, first advertised in the Columbian in 1812, was realized in a forty-seven-mile line between New York City and Sandy Hook, NJ.

    Figure 9: Christopher Colles’s semaphore telegraph

    Colles’s semiotic machine was a hybrid of the semaphore, the ratcheting gear, and the clock face, suggesting for the first time a structure not wedded to the land, in the manner of a house, but a mobile, if clunky, device. Moreover, Colles’s telegraph is DIY-modular, and as with the French telegraphic station map it is comically homologous to the human form—a promising stick figure for the execution and administration of post-Enlightenment “unexpected intelligence.” The austere new agent of meaning is fit for Madisonian executive pragmatism—or bumbling, as the case may be. Unlike the French homology of state power, here the telegraphic pattern is coded at a smaller scale, individualized and repurposed for the public sphere of dispersed American political regionalism.

    We are learning in the process of historicizing “new media” that nothing is especially new under certain conditions of modernity (see Pingree and Gitelman 2003), and this is particularly true for telegraphy as a medium for political discourse and organized statecraft. The history of the telegraph is far from discretely composed by the period I have chosen to examine in this essay; a range of phenomena—print culture, republicanism, early American regional allegiances—resolves to greater clarity in the wake of the telegraph’s semiotic migrations. The hardnosed political deployment of the technology in France, the rhetorical circulation of an idea in the absence of state support in the United States, and then the effects of republican executive theory in the galvanizing moment of war—all form a proto-apparatus for militarized communication.

    And there is a deeper connection worth following and remembering that has to do with how cultural memory works and how political techniques maintain critical visibility to the public: During the 1790s, the telegraphic metaphor lodged itself within republican print culture benignly and visibly, removed from (but still encoded with) the terms of militarized informatics. That metaphor of telegraphic power emerged as a discernible self-consciousness about the technological shifts and executive ascendancy that began to fundamentally reorder the print sphere and its publics; but that self-consciousness did not extend to shifts in how executive semiotics reordered political life as a function of its form. The early technologists saw all symbol and nation, but not form or the implications of form. Telegraphy is, as such, part of the story of the undoing of the political order of print and the making of something newly invisible both to the public and to itself. Such technology, in which data is joined to politics, communicates its own “natural” ascendancy.

    I’d like to thank Sohinee Roy and Christopher Hanlon for their help in preparing this manuscript for publication.

    References

    Beauchamp, Ken. 2008. A History of Telegraphy. London: Institution of Engineering and Technology.

    Cave, Edward. 1794. Gentleman’s Magazine and Historical Chronicle 64, Part 2: 815–16.

    Cohen, Matt. 2009. The Networked Wilderness: Communicating in Early New England.  Minneapolis:  University of Minnesota Press.

    Colles, Christopher. 1813. Description of the Numerical Telegraph for Communicating Unexpected Intelligence by Figures, Letters, Words, and Sentences. Brooklyn, NY: Alden Spooner.

    Crain, Patricia. 2003. “Children of Media, Children as Media: Optical Telegraphs, Indian Pupils, and Joseph Lancaster’s System for Cultural Replication.” In New Media: 1740–1915, edited by Lisa Gitelman and Geoffrey B. Pingree, 61–90. Cambridge, MA: MIT Press.

    Dickinson, John. 1798. A Caution: or, Reflections on the Present Contest between France and Great-Britain. Philadelphia, PA: Benjamin Franklin Bache.

    Giedion, Sigfried. 2014. Mechanisation Takes Command: A Contribution to Anonymous History. Minneapolis: University of Minnesota Press. First published 1948.

    Gifford, John. 1792. The History of France. London:  C. Lowndes and W. Locke.

    Holzmann, Gerard J., and Björn Pehrson. 2003. The Early History of Data Networks. Hoboken, NJ: Wiley.

    Hooke, Robert. 1967. Philosophical Experiments and Observations of the Late Eminent Dr. Robert Hooke. Edited by William Derham. London: Frank Cass and Co.

    Innis, Harold A. 1951. The Bias of Communication. Toronto: University of Toronto Press.

    Kittler, Friedrich. 2010. Optical Media: Berlin Lectures 1999. Translated by Anthony Enns. Malden, MA: Polity.

    Locke, John. 1689. An Essay Concerning Human Understanding. Oxford: Oxford University Press.

    Lowell Jr., John. 1797. The Antigallican [sic]; or, The Lover of His Own Country: In a Series of Pieces Partly Heretofore Published and Partly New, Wherein French Influence, and False Patriotism, Are Fully and Fairly Displayed. Philadelphia, PA: William Cobbett.

    Manovich, Lev. 2001.  The Language of New Media. Cambridge:  MIT Press.

    O’Rielly, Henry. 1869. The Historical Magazine, Notes and Queries Concerning the Antiquities, History, and Biography of America. Vol. V, Second Series. Morrisania, NY: Henry B. Dawson.

    Petre, F. Loraine. 2003. Napoleon and the Archduke Charles. Whitefish, MT: Kessinger.

    Pingree, Geoffrey B., and Lisa Gitelman. 2003. “Introduction: What’s New About New Media?” In New Media, 1740–1915, edited by Lisa Gitelman and Geoffrey B. Pingree, xi–xxii. Cambridge, MA: MIT Press.

    Wertheimer, Eric. 2012. “Pretexts: Some Thoughts on the Militarization of Print Rationality in the Early Republic.” Canadian Review of American Studies 42, no. 1: 21–35.

    Wouves, P. R. 1787, repr. 1797. Tableau Syllabique et Steganographique/A Syllabical and Steganographical Table. Philadelphia, PA: Benjamin Franklin Bache.

    Notes

    [1] See Patricia Crain’s (2003) brief but seminal discussion of Napoleon’s message in “Children of Media, Children as Media.” Napoleon’s communiqué of 1799 was not, of course, the first message sent by nationalized telegraph. The very first message came years earlier, in 1794, with word of the capture of Quesnoy, an intriguingly precise post-revolutionary report about the surrender of “slaves”: “Austrian garrison of 300 slaves has laid down its arms and surrendered at discretion” (Petre 2003, 65; original emphasis).

    [2] Much of my thinking about this set of historically resonant design issues arises from a vein of media theory and history that originates in the work of Harold A. Innis, whose The Bias of Communication (1951) begins to connect empire with media history, and of Sigfried Giedion, whose Mechanisation Takes Command (1948, repr. 2014) articulates a way to think about design as a signifier with deep meaning embedded in social (what he calls “anonymous”) history. I am aware my own terminology here is indebted to the new directions in media history, which take chances with analogies and homologies of form—thus the words “computational” and “binary” in my discussion of the early semaphore telegraph. I view this chance-taking as a way to theorize the semaphore as part of ideologies of representation that are playing out now, in fields like circuit design and social media analysis, but have distinct and identifiable origins in the past. Others in this lineage are, of course, Friedrich Kittler (2010) and Lev Manovich (2001), though I am wary of fully agent-less iterations of media history. That wariness is why I am interested in the political figure of the executive within this interpretive and representational network.

    [3] No accident, it would seem, that Napoleon spent part of 1795 in the Department of Topography. It is worth mentioning that 1795 was also the year he produced the fragments of what would become his novella, Clisson et Eugénie, an example of a remarkably direct style of novelistic narrative.

    [4] This is a view of language largely shared by Hooke’s contemporary John Locke in his Essay Concerning Human Understanding (1689); Locke is deeply impatient with the inefficiencies of language as a means to certainty in communication. For further elaborations on the work of Hooke as a key figure in the history of telegraphic representation, see Wertheimer (2012).

    [5] Kittler (2010) uses this phrase in referring to Athanasius Kircher’s lanterna magica, pointing out that Kircher developed the projecting device as part of military research (74).

    [6] According to Gerard J. Holzmann and Björn Pehrson (2003) in The Early History of Data Networks, the earliest experiments with the concept of the modern telegraph system occurred in the French navy: Captain Decourrejolles, in 1783, used a coastal network of semaphore stations to transmit enemy positions. In A History of Telegraphy, Ken Beauchamp (2008) offers an in-depth description of the development of electrostatic telegraphy, which predated Chappe’s invention but remained for much of the eighteenth century a parlor spectacle rather than a viable state technology.

    [7] Published 276 issues between October 2, 1799 and May 22, 1802.

    [8] Published 5 issues between September 25, 1811 and December 22, 1813.

    [9] Published 128 issues between April 8, 1795 and December 28, 1796.

    [10] Published 41 issues between December 3, 1796 and March 9, 1799, with minor variant titles. It later had the title Lincoln Telegraph, which published 78 issues between April 27, 1820 and October 18, 1821.

    [11] Published 67 issues between May 6, 1795 and August 17, 1796.

    [12] Published 159 issues between January 1, 1820 and July 13, 1822 in New Hampshire.

    [13] Published 11 issues between August 19, 1812 and December 25, 1812 in  Norwich, NY.

    [14] Published 144 issues between July 7, 1818 and December 26, 1820 in Rochester, NY.

    [15] Published 77 issues between January 13, 1813 and July 30, 1814 in Norwich, NY (it was also known as The Telegraph and the Newton Telegraph).

    [16] Published 159 issues between November 9, 1814, and March 4, 1818.

  • David Tomkins – Assuming Control: Spielberg Rewires Ready Player One

    David Tomkins – Assuming Control: Spielberg Rewires Ready Player One

    by David Tomkins

    I.

    Ernest Cline’s bestselling novel Ready Player One (2011) envisions a future ravaged by war, climate change, famine, and disease in which most lived experience takes place in an immense multi-player virtual reality game called the OASIS. Created by James Halliday, an emotionally stunted recluse obsessed with 1980s pop culture, the OASIS promises relief from real world suffering by way of a computer-generated alternative reality overflowing with ‘80s pop culture references. Cline’s novel follows teenager Wade Watts on an adventure to locate the digital “Easter egg” that Halliday buried deep within the OASIS shortly before his death. Those seeking the egg must use three hidden keys (made of copper, jade, and crystal, respectively) to open secret gates wherein players face challenges ranging from expertly playing the arcade game Joust to flawlessly reenacting scenes from Monty Python and the Holy Grail. The first person to find the egg will inherit Halliday’s fortune, gain controlling stock in his company Gregarious Simulation Systems, and assume control of Halliday’s virtual homage to the ‘80s, the OASIS.

    Rich in the ‘80s nostalgia saturating popular entertainment in recent years, and with a particular reverence for Steven Spielberg’s ‘80s corpus, Cline’s novel attracted legions of readers upon publication and became an instant best seller.[1] The signing of Spielberg to direct and produce the film version of Ready Player One underscored the treatment of Spielberg’s films in the novel as quasi-sacred texts, and generated a kind of closed feedback loop between textual and visual object.[2] Shortly before the film went into production, Cline told Syfy.com that it was “still hard for [him] … to wrap [his] … head around Steven Spielberg directing this movie,” in part, because the director showed himself to be such a huge fan of Cline’s novel, arriving at pre-production meetings with a paperback copy of Ready Player One containing dozens of notes regarding moments he wanted to include in the film.[3]

    But none of these moments, it turns out, included references to Spielberg’s earlier films. In fact, Spielberg made it a point to remove such references from the story. In 2016, Spielberg told Collider.com that he decided to make Ready Player One because it “brought [him] back to the ‘80s” and let him “do anything [he wanted] … except for with [his] own movies.”[4] “Except for the DeLorean and a couple of other things that I had something to do with,”[5] Spielberg added, “I cut a lot of my own references out [of the film].”[6] One can read Spielberg’s decision simply as an attempt to avoid self-flattery—a view Spielberg appears keen to popularize in interviews.[7] But equally compelling is the idea that Spielberg felt at odds with the version of himself celebrated in Cline’s novel, that of the marketable and broadly appealing director of blockbusters like Jaws, E.T., and Raiders of the Lost Ark—in other words, the Spielberg of the ‘80s. Over the last twenty-five years, Spielberg has largely moved away from pulp genres toward a nominally more “serious,” socially conscious direction as a filmmaker (recent family-friendly films such as The BFG and The Adventures of Tintin notwithstanding). Ready Player One, however, a science fiction movie about teenage underdogs coming of age, sits comfortably among the films of Spielberg’s early canon—the deeply sentimental, widely appealing family-oriented films generally understood to have shaped the landscape of contemporary Hollywood.

    The tension between early and late Spielberg in Ready Player One is among the driving forces shaping the director’s adaptation of Cline’s novel. By removing most references to himself from the film, Spielberg not only rewrites an important aspect of the source material, he rewrites American cinematic history of the last 40 years. Jaws, Close Encounters, E.T., the Indiana Jones films—these works are in certain ways synonymous with ‘80s pop culture. And yet, in making a movie about ‘80s nostalgia, Spielberg begins by pointing this nostalgia away from its most famous and influential director. This self-effacing act, which effectively erases the Spielberg of the ‘80s from the film, and by extension from the era it commemorates, belies the humility animating Spielberg’s public comments on self-reference. Spielberg saturates Ready Player One—as Halliday does the OASIS—with a meticulously crafted self-image, and what’s more, affords himself total control over the medium wherein (and from which) that image is projected. Spielberg paradoxically rewrites popular memory as a reflection of his own preoccupations, making Ready Player One a film in which the future the audience is asked to escape into is defined by Spielberg’s rewriting of the cinematic past.

    Central to Spielberg’s project of recasting ‘80s nostalgia in Ready Player One is an attempt to recuperate figures of corporate or governmental power—entities unlikely to have fared well in his ‘80s work. From the corrupt Mayor Vaughn in Jaws, to the pitiless government scientists in Close Encounters and E.T., to the bureaucrats who snatch Indy’s prize at the end of Raiders of the Lost Ark, figures in elite institutional positions typically pose a threat in early Spielberg. What’s more, in E.T., as well as Spielberg-produced films like The Goonies, these figures commit acts that compel young characters to take heroic, rebellious action. But in portraying Halliday as a meek, loveable nerd in Ready Player One, Spielberg introduces something new to the classic Spielbergian playbook that has implications not only for how we understand Spielberg’s ‘80s films, but also for how Ready Player One situates itself vis-à-vis contemporary pop culture. In Cline’s novel, Halliday comes across as a trickster figure in the mold of Willy Wonka—so much so that one of the first rumors to emerge about Spielberg’s adaptation of Ready Player One was that the director had coaxed Gene Wilder out of retirement to perform the role. Not only did that rumor turn out to be baseless, but the characterization of Halliday in Spielberg’s film neutralizes the faintly sinister underpinnings of Cline’s portrayal, replacing them with a goofy innocence, and an insistence—informed as much by contemporary celebrity worship as by Spielberg’s own status as an elder statesman of Hollywood—that the audience sympathize with, rather than despise, the all-powerful multi-billionaire.

    Halliday’s vast corporate empire, his incalculable wealth, the extraordinary political and cultural power he undoubtedly wields as the creator of an entertainment technology juggernaut—none of these things factor into Spielberg’s portrayal. Rather, Spielberg’s film compels us to pity Halliday, to see him as someone who has suffered, someone whose genius has denied him the kind of emotional life that we, the audience, take for granted (or that Spielberg wants us to take for granted, as the rich emotional interiority he imagines is itself a construct). Given that both Halliday (in Cline’s novel and Spielberg’s film) and Spielberg (in the real world) share global renown as authors of popular entertainment, it’s unsurprising that Spielberg would sympathize with the character. After all, the name Spielberg, whether cited in a production or directorial capacity, or as a generic descriptor (“Spielbergian”), suggests  “a mélange of mass appeal, sentiment and inchoate childlike wonder”—a description one could easily imagine applied to the OASIS.[8] But what is surprising is that Spielberg redeploys the sentimentality of his early work in Ready Player One to affirm the vertical social organization and imperialist ideology those films, at least on the surface, appear to attack.

    The truth-to-power ethos of Spielberg’s ‘80s corpus is enlisted in Ready Player One to sentimentalize the corporate overlord’s yearning to protect his product and control his legacy. Similar to how the rebel struggle against the evil empire in George Lucas’s Star Wars films ultimately reinforces another corporate empire (Lucasfilm), Spielberg’s early rebellions—which were never all that “radical” to begin with, given Spielberg’s fondness for traditional hetero-normative social structures—fold in on themselves in Ready Player One, readjusted to serve the film’s confirmation of neoliberal ideology and corporate sovereignty. What looks superficially in Ready Player One like a toning down of Spielberg and a celebration of Cline is in fact Spielberg through and through, but with the ironic upshot being the recuperation of institutional and corporate power, the affirmation of existing class structures, and a recasting of the heroic rebellions one finds in Spielberg’s early work as far more conservative.

    Unlike Spielberg’s film, Cline’s novel focuses a great deal on Halliday’s astonishing wealth, and it’s clear that for “gunters”—characters like Wade in search of Halliday’s egg—the acquisition of Halliday’s wealth is easily as important as gaining control of the OASIS. Wade, like most characters in Cline’s novel, is dirt poor: he, like millions of others, lives in a broken-down mobile home stacked, along with dozens of others, hundreds of feet high. The world Cline describes is one of abject poverty: while the vast majority of people have next to nothing, Halliday, and a handful of corporate overlords like him, possess all the wealth, and wield all the power. This is not to overstate Cline’s interest in class in Ready Player One; indeed, he spends precious little time exploring the penurious world outside Halliday’s OASIS. Like his characters, it’s clear that Cline can’t wait to get back to the OASIS. But in Spielberg’s film, the at-best perfunctory acknowledgement of class dynamics seen in Cline’s novel is utterly ignored. Instead, Spielberg asks us to empathize with Halliday, maybe even to identify with him as much as—if not more than—we do with Wade.

    Rather than encouraging us to revile the corporate overlord responsible for impoverishing the world and controlling the lives of the story’s youthful heroes, Ready Player One stands out among Spielberg’s oeuvre (and recent Hollywood films generally) for the way it recasts the “innocent” teenager, whom Spielberg marketed so effectively in the ‘80s as an implicit bulwark against oppressive powers, as a figure sympathetic to the dominant, unassailable corporate forces of the future.[9] Whereas in Cline’s novel Wade suggests using his newfound wealth to “feed everyone on the planet,” and to “make the world a better place,” Spielberg glosses over Wade’s windfall entirely, focusing instead on what Wade’s acquisition of the OASIS allows him to take away from—rather than give to—the powerless masses. In effect, the wayward teenagers of Spielberg’s corpus mature into a kind of “ghost in the machine” of capital.

    The control Spielberg wishes to exert—over audiences, the film, the ‘80s—is perhaps most evident in the final moments of Ready Player One. As the film draws to a close, main character Wade speaks of disengaging from the OASIS to delight in the sensory and emotional experiences accessible only in the real world. In the novel, Cline similarly concludes with Wade revealing that “for the first time in as long as [he] could remember, [he] … had absolutely no desire to log back in to the OASIS.”[10] But in Spielberg’s hands, Wade’s newfound ambivalence about the OASIS has broader implications, as Wade, who ultimately wins control of the OASIS, sets limits on its availability, effectively forcing the tech-addled masses of 2045 to accept, as Wade now does, that “people need to spend more time in the real world.”[11]

    However, the restrictions that Wade—and by extension Spielberg—puts in place fail to do this; rather, they reveal the film’s great irony: that Spielberg asks audiences to discover an empathetic, authentic reality in the context of a simulated world that he, Spielberg, creates (and, it is implied, that he alone could create). By adding to Wade’s character a strong inclination toward hetero-normative romantic connection in the real world, and by directing Wade to downgrade public access to the OASIS so that its millions of users may find “real” love, Spielberg invites his audience to seek out and prioritize “authentic” humanity in contrast to that offered in the OASIS. But Spielberg does so by positing as authentic a simulation of human connection, which he then presents as the corrective not only to the film’s characters’ obsession with technology, but also to that of contemporary western society. In doing so, Spielberg attempts to situate himself apart from peddlers of artificiality like Halliday (with whom he nevertheless clearly identifies). But instead he succeeds, despite his lifelong preoccupation with celebrating and stirring human connections and emotions, in becoming the master generator of simulacra. Ultimately the film’s viewers find themselves absorbed into the position of the creator of the OASIS, so that the absence of specific references to Spielberg’s early films conceals a remaking of the entire world of the film in Spielberg’s image.

    II.

    In the film’s final scene, Spielberg assembles numerous sentimental cues to soften Wade’s mandate that users henceforth limit their time in the OASIS, thus making his demands appear more altruistic than draconian. As the camera pans across what appears to be Wade’s spacious new apartment (a significant step up from the cramped trailer he lived in previously), we see Wade and his recently acquired girlfriend Samantha sharing a kiss as he explains in a voice-over his plans for the OASIS, and as the ‘80s pop of Hall and Oates’s “You Make My Dreams Come True” gradually dominates the soundtrack. While neither the voice-over nor the establishment of the romantic couple is a particularly common trope among Spielberg’s endings overall, the collision of familial sentimentality and budding romance we see in Ready Player One nevertheless recalls several of Spielberg’s endings from the late ‘70s and early ‘80s in films like Close Encounters, E.T., and Indiana Jones and the Temple of Doom.[12]

    In anticipation of this highly sentimental ending, the film drastically accelerates the pace of the pair’s relationship: in the novel, they don’t meet in the real world until the last few pages, and their relationship—at least as far as Samantha is concerned—seems at best a work in progress. But Spielberg brings Wade and Samantha together in the real world halfway through the film, and makes their romantic connection a central concern. In doing so, and in explicitly depicting them in the final shot as a romantic couple, Spielberg creates contextual support for the argument he clearly wishes to make: that real world romance, rather than virtual game play, makes “dreams come true.” But even if this is so for Wade and Samantha, there’s little evidence to suggest that OASIS addicts around the world have had a similar experience. The suggestion, of course, is that they will—once they’re forced to.

    Not only do Wade’s new rules for the OASIS disregard the social upheaval that the narrative all but ensures would take place, they also aggressively elide the anti-social foundation upon which the OASIS was conceived. In an earlier scene, Halliday reveals that he created “the OASIS, because [he] … never felt at home in the real world,” adding that he “just didn’t know how to connect with the people there.” Whether simulations of Atari 2600 games like Adventure or of movie characters like Freddy Krueger, the contents of the OASIS are not only replicas, they’re replicas of replicas—virtual manifestations of Halliday’s adolescent obsessions placed in a world of his own making, and for his own pleasure. One “wins” in the OASIS by collecting virtual inventories of Halliday’s replicas, and gains social significance—in and outside the OASIS—according to what (and how much) one has collected.

Despite cautioning Wade to avoid “getting lost” in the OASIS and revealing that, for him, the real world is “still the only place where you can get a decent meal,”[13] Halliday stops short of amending the central function of the OASIS as a replacement for, rather than a supplement to, human interaction in the real world. Meanwhile, Wade’s parting words in the film limiting access to the OASIS point spectators toward an artificial reality of Spielberg’s making that is deeply invested in hiding its own artifice, and that punctuates a series of rewritings that remove Spielberg references from the film while simultaneously saturating it with his presence. At the same time, Spielberg ensures that the spectator’s sense of the ‘80s conforms to his own preoccupations, which themselves took hold in the context of the increasingly aggressive corporatization of the film industry that took place during this period. Consequently, the nostalgic universe generated in the film offers no exit from Spielberg, despite the absence of his name from the proceedings.

The film rehearses this paradox once again in its treatment of Halliday’s end, which differs significantly from that of the novel. Arguably Spielberg attempts to secure his controlling presence—both in the film, and of cinematic history—by leaving ambiguous the fate of the OASIS’s creator. Although in both the book and the movie Halliday’s avatar Anorak appears to congratulate Wade (known as Parzival in the OASIS) on acquiring the egg, Cline describes an elaborate transfer of powers the film all but ignores. Upon taking Anorak’s hand, Wade looks down at his own avatar to discover that he now wears Anorak’s “obsidian black robes” and, according to his virtual display, possesses “a list of spells, inherent powers, and magic items that seemed to scroll on forever.”[14] Halliday, now appearing as he often did in the real world with “worn jeans and a faded Space Invaders T-shirt,” comments, “Your avatar is immortal and all-powerful.”[15] Moments later Cline writes:

    Then Halliday began to disappear. He smiled and waved good-bye as his avatar slowly faded out of existence … Then he was completely gone.[16]

Under Spielberg’s direction, this scene—and in particular Halliday’s exit—plays out very differently. While we are made aware of Wade’s victory, his avatar’s appearance remains unchanged and there’s no mention of Wade gaining all-powerful immortality. And whereas Cline explicitly refers to the image of Halliday that Wade encounters as an avatar—and therefore a program presumably set to appear for the benefit of the game’s victor—the film goes out of its way to establish that this image of Halliday is nothing of the sort. When Wade asks if Halliday is truly dead, the image responds in the affirmative; when asked if he’s an avatar, the image replies no, and doesn’t respond at all to Wade’s final question, “what are you?” Instead Halliday’s image, accompanied by another image of his childhood self, walks silently through a doorway to an adjacent room and closes the door.

    Rather than supplanting himself with a younger overlord and “fading out of existence” as he does in the novel, Spielberg’s Halliday remains part of the world he created, hesitant to relinquish full control. Closing the door behind him may signify an exit, but it doesn’t preclude the possibility of a return, especially given that neither he nor Wade locks the door. In place of closure, Halliday’s departure, along with his acknowledgement that he’s neither real nor simulation, suggests a more permanent arrangement whereby Halliday remains the animating essence within the OASIS. Halliday cannot “fade out of existence” in the OASIS because he effectively is the OASIS—its memory, its imagination, the means through which its simulations come to life. Whereas in the novel, Anorak’s transferal of power to Wade/Parzival suggests an acquisition of unadulterated control, the film proposes an alternative scenario in which Halliday’s creative powers are not fully transferable. In order for the OASIS to function, the film implies, Halliday must somehow remain within it as a kind of guiding force—a consciousness that animates the technological world Halliday created.

By replacing the simulation of Halliday that Wade encounters at the end of the novel with a mysterious deity figure taking up permanent residence inside the OASIS, Spielberg betrays a level of affection for the multi-billionaire world builder reminiscent of his treatment of the John Hammond character in Jurassic Park (1993). In that film, Spielberg spares the life of the deadly park’s obscenely wealthy creator and CEO—portrayed with jolly insouciance by Richard Attenborough—even though the character is ripped to pieces by dinosaurs in Michael Crichton’s novel of the same name. In Ready Player One Spielberg ups the ante, allowing the world builder and corporate overlord to ascend to godly status, thereby ensuring that as long as the OASIS exists, so will its creator Halliday.

    III.

    In contrast to the clear-cut usurpation of the eccentric billionaire by the indigent but tenderhearted teenager seen in Cline’s novel, the movie version of Ready Player One asks audiences to accept a more opaque distribution of controlling interests. While on the one hand the film presents the OASIS as a site of emotional suppression wherein users—following Halliday’s example—favor artificial stimulation over real world emotional connection, it also insists viewers recognize that Halliday created the OASIS in response to real world emotional trauma. The film uses this trauma to neutralize the class distinctions between Wade and Halliday that the novel highlights, and asks spectators to view both characters through a lens of universalized emotional vulnerability. The film then uses this conception of emotional trauma to encourage spectators to sympathize and identify with the corporate billionaire, welcome his transcendence into technological deification, and accept Wade not as a usurper but as an administrator of Halliday’s corporate vision.

    But by magnifying the role social anxiety and fear of human intimacy played in creating the OASIS, the film also sets up the OASIS itself as, ultimately, a site of redemption rather than emotional suppression. Nowhere is this reworking of the OASIS more striking than during Wade’s attempt to complete Halliday’s second challenge. In a total overhaul of the novel, Wade (as Parzival) seeks clues unlocking the whereabouts of the Jade Key by visiting Halliday’s Journals, a virtual reference library located inside the OASIS. In the novel, gunters carefully study a digital text known as Anorak’s Almanac, an encyclopedia of ‘70s and ‘80s pop culture memorabilia compiled by Halliday and named after his avatar. The film replaces the almanac with a “physical” archive holding various pop culture artifacts of importance to Halliday, as well as memories of actual events in Halliday’s life. Crucially, like everything else in the OASIS, the contents of Halliday’s Journals are simulations created by Halliday based on his own memories.

These memories appear as images carefully re-imagined for cinematic display: gunters watch Halliday’s “memory movies” via a large rectangular screen through which (or on which) the images themselves appear (or are projected) as a kind of three-dimensional hologram. To look at the screen is to look into the environment in which the events occurred, as if looking through a wall. In the memory containing Halliday’s one and only reference to Karen Underwood—his one-time date, and the future wife of his former business partner Ogden Morrow—Halliday approaches what is essentially the “fourth wall” and, while not necessarily “breaking” it, peers knowingly into the void, signaling to gunters—and thus to spectators—that recognizing the significance of this “leap not taken” regarding his unrealized affection for Karen is crucial to completing the second challenge. Spielberg latches on to Halliday’s failure with Karen, making this missed romantic opportunity the site of significant lifelong emotional trauma, and the de facto cause of Halliday’s retreat into creating and living in the OASIS.

Halliday’s archive also contains all of his favorite ‘80s movies, which appear as immersive environments that gunters may explore. Upon learning that Halliday, on his one and only date with Karen, took her to see Stanley Kubrick’s 1980 adaptation of Stephen King’s novel The Shining, Wade (again, as Parzival) and his comrades (and the film’s audience) enter the lobby of the Overlook Hotel exactly as it appears in Kubrick’s film. The ensuing sequence is particularly revelatory in that we witness the camera gleefully roaming the interiors of Kubrick’s Overlook Hotel. Spielberg clearly delights in this scene, in the same way that Halliday, in Cline’s novel, relishes simulating the cinematic worlds of War Games and Monty Python and the Holy Grail. But in those cases, OASIS players must adopt one of those films’ characters as an avatar in order to show reverence by reciting dialogue and participating in scenes. In Cline’s novel, Halliday is interested in using reenactment to measure the depth of players’ devotion to his favorite films.

In Spielberg’s adaptation, however, Parzival enters Halliday’s simulation of The Shining not as part of the story, but as a spectator. In one sense, Spielberg’s Halliday opens cinema up to players, enabling them to remain “themselves” while interacting with cinematic environments to discover clues leading to the Jade Key and therefore victory in Halliday’s second challenge. The theory of spectatorship that the film seems to advance during this sequence insists that the real pleasure of cinema lies not in the passive watching of it, but in its imaginative regeneration and exploration. The spectator’s imagination has the ability to call up a cinematic memory, and to stage new stories or scenes in the environments recalled there. To connect with a film is to hold it in one’s memory in such a way that it can be explored repeatedly, and in different ways.

But while this conception of spectatorship appears to give viewers the ability to make cinema broadly their own, in fact, with Spielberg’s inhabiting of The Shining, we witness a specific transmutation of Kubrick’s text into an entry in Spielberg’s own corpus. In The Shining, Kubrick crushes those aspects of Stephen King’s narrative that would have importance for Spielberg, namely King’s interest in family trauma and intergenerational conflict. For Kubrick, the family is a scene of a pure violence that infects and corrodes the human capacity for empathy and rationality, thereby forcing violent action recursively back on itself. Kubrick’s film is clearly anti-Spielbergian in this sense, and yet in his replay of The Shining in Ready Player One, Spielberg does his own violence to Kubrick’s vision, taking control of the simulacrum and re-producing The Shining as a site of redemption—something it decidedly is not in Kubrick’s original.

    After a series of gags that play some of Kubrick’s most haunting images—the twin sisters, the torrent of blood, the decaying women in room 237—for laughs, Wade finds himself in the ballroom of the hotel. Once there, the simulation of Kubrick’s film gives way to a new setting completely unique to Halliday’s imagination, wherein dozens of decomposing zombies dance in pairs, with a simulation of Halliday’s never-to-be love, Karen Underwood, being passed from zombie to zombie. To win the challenge, a player must figuratively make the “leap” that confounded Halliday, using small, suspended platforms, as well as zombie shoulders and heads, to make his way to Karen, whom he must then ask to dance. This challenge reveals to players, and to the audience, Halliday’s emotional vulnerability, highlighting his regret, and foreshadowing the lesson Spielberg imposes on viewers at the film’s end: namely, that audiences should see Halliday’s story as a cautionary tale warning against using technology to repress the need to connect with other human beings.

Spielberg begins his adaptation of Cline’s novel with another radical revision, substituting an action set piece—a car race—for an elaborate two-tier challenge wherein Wade must best a Dungeons and Dragons character at the classic arcade game Joust and later recite dialogue from the ‘80s movie War Games starring Matthew Broderick. After several failed attempts, Wade discovers that in order to win the race he must travel backwards, a move clearly highlighting the film’s nostalgic turn to the ‘80s. Although this sequence features the film’s most overt reference to Spielberg’s ‘80s corpus in the form of Wade’s car, a replica of Marty McFly’s DeLorean from the Spielberg-produced film Back to the Future, more significant is the extremity of the challenge’s revision, and the fact that nothing within the film or Cline’s novel suggests that a big action spectacle with lots of fast cars might be at all in keeping with Halliday’s ‘80s pop culture preoccupations.

More likely, given the affinity Spielberg shows throughout the film for redressing Halliday’s world in his own image, is that this sequence pays homage to Spielberg’s friend (and fellow Hollywood elder) George Lucas, whose own early corpus was defined, in part, not only by his film American Graffiti, but by his trademark directorial note, “faster and more intense”—a note this sequence in Ready Player One takes to heart. With this scene and the others mentioned previously, Ready Player One recasts “classic Spielberg” by shifting emphasis away from teenage innocents and toward corporate overlords with whom the story’s young heroes are complicit in the project of subjugation. What emerges is the supremacy and permanence of the corporate overlord whom Spielberg both identifies with and wishes to remake in his own image in such a way that the overlord’s world becomes a site for the Spielbergian values of homecoming and redemption rather than emotional repression aided by escape into simulacra. The irony is that the world of homecoming and redemption he offers is itself nothing other than cinema’s simulation.

    Bibliography

    Breznican, Anthony. “Steven Spielberg Vowed to Leave His Own Movies Out of ‘Ready Player One’—His Crew Vowed Otherwise.” Ew.com, March 22, 2018, http://ew.com/movies/2018/03/22/ready-player-one-steven-spielberg-references/.

    Cabin, Chris. “’Ready Player One’: Steven Spielberg Says He’s Avoiding References to His Own Movies.” Collider.com, June 22, 2016, http://collider.com/ready-player-one-steven-spielberg-easter-eggs/.

Cline, Ernest. Ready Player One. New York: Broadway Books, 2011.

Hunter, I.Q. “Spielberg and Adaptation.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 212-226.

Kramer, Peter. “Spielberg and Kubrick.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 195-211.

    Nealon, Jeffrey T. Post-Postmodernism or, The Cultural Logic of Just-in-Time Capitalism. Stanford, CA: Stanford UP, 2012.

Russell, James. “Producing the Spielberg ‘Brand.’” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 45-57.

Spielberg, Steven, dir. Ready Player One. 2018.

Walker, Michael. “Steven Spielberg and the Rhetoric of an Ending.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 137-158.

    Watkins, Denny. “Ernest Cline Geeks Out About Spielberg Adapting ‘Ready Player One.’” Syfy.com, May 2, 2016, http://www.syfy.com/syfywire/ernest-cline-geeks-out-about-spielberg-adapting-ready-player-one.

Vinyard, Papa. “Be Ready for ‘Ready Player One’ in December 2017.” Ain’t It Cool News, August 6, 2015, http://www.aintitcool.com/node/72613.

    Notes

[1] From remakes (The Karate Kid, Clash of the Titans) and sequels (Tron: Legacy, Wall Street: Money Never Sleeps) to original TV shows drawing on ‘80s cultural influences (Stranger Things, The Americans), ‘80s nostalgia has been exceedingly popular for the better part of a decade. Addressing the current ubiquity of the ‘80s, Jeffrey T. Nealon argues that “it’s not so much that the ‘80s are back culturally, but that they never went anywhere economically,” adding, “the economic truisms of the ‘80s remain a kind of sound track for today, the relentless beat playing behind the eye candy of our new corporate world” (Post-Postmodernism).

    [2] When it was announced that Spielberg would adapt Ready Player One, entertainment journalists rejoiced, describing the move as a “return to ‘blockbuster filmmaking’” for Spielberg that would give Cline’s story both “street cred and … mainstream appeal” (Vinyard, “Be Ready for ‘Ready Player One’”).

[3] Watkins, “Ernest Cline Geeks Out.”

[4] Cabin, “Steven Spielberg Says.”

    [5] In both the novel and film, Wade’s avatar, known as Parzival in the OASIS, drives a simulation of the DeLorean featured in the Back to the Future films, which Spielberg produced.

[6] Cabin, “Steven Spielberg Says.”

[7] Spielberg remarked in 2016 that “[he] was very happy to see there was enough without [him] that made the ‘80s a great time to grow up” (Cabin, “Steven Spielberg Says”), and in a 2018 interview with Ew.com Spielberg insisted, “I didn’t corner the ‘80s market … there’s plenty to go around” (Breznican, “Steven Spielberg Vowed”).

    [8] Russell, “Producing the Spielberg ‘Brand.’”

[9] While it’s true, in both the novel and the film, that prohibiting corporatist Nolan Sorrento from acquiring the OASIS is a priority for Wade, what motivates him is not antipathy to capitalist enterprise, but rather the desire to preserve the “pure” capitalist vision of the OASIS’s corporate creator, Halliday. Averse to Sorrento’s plans to further monetize the OASIS by opening up the platform to infinite numbers of advertisers, Wade simply prefers Halliday’s more controlled brand of corporatism, which appears rooted in what Nealon would describe as “the dictates of ‘80s management theory (individualism, excellence, downsizing)” (Post-Postmodernism, 5). The film likewise shares an affinity for heavily centralized, individualized, and downsized corporate control.

    [10] Cline, Ready Player One, 372.

    [11] Spielberg, Ready Player One.

    [12] Walker, “Rhetoric of an Ending,” 144-145, 149-150.

[13] Spielberg, Ready Player One.

    [14] Cline, Ready Player One, 363.

    [15] Cline, Ready Player One, 363.

    [16] Cline, Ready Player One, 364.

  • Gretchen Soderlund — Futures of Journalism’s Past (or, Pasts of Journalism’s Future)

Gretchen Soderlund — Futures of Journalism’s Past (or, Pasts of Journalism’s Future)

    Gretchen Soderlund

Journalists might be chroniclers of the present, but two decades of books, conferences, symposia, interviews, talks, special issues, and end-of-year features on the future of news suggest they are also preoccupied with what lies ahead. Still, few of today’s media workers are as prescient as William T. Stead, the English journalist and amateur occultist who came close to predicting the 1912 Titanic disaster twenty years before he died in it. In his 1893 short story, “From the Old World to the New,” a transatlantic ocean liner collides with an iceberg and erupts in flames, leaving the vessel’s desperate passengers clinging to a sheet of ice. Unlike the Titanic, everyone in the story lives. Two passengers on a nearby ship receive telepathic distress signals. One has haunting visions of the accident in her sleep, and the other finds a written plea for help in the handwriting of a friend travelling aboard the sinking ship. The clairvoyants relay this information to their captain, who steers a perilous course through the icebergs and rescues the shipwrecked passengers. In 1893 wireless telegraphy, the early term for radio, did not yet exist (even if, as an idea, it electrified the Victorian imagination). By the time of the Titanic’s maiden voyage, radio was a standard maritime communication device. The technology helped, but was no panacea: the closest ship to receive the Titanic’s SOS signals arrived too late for Stead and many of his fellow passengers.

Stead was at the forefront of thinking about new technologies as well as his own demise. He also had a keen interest in journalism’s future, one shared by many of today’s news workers. Even people who failed to predict the collision of twentieth-century news models with the Web are now regularly called upon to forecast the profession’s future. Answering the future-of-news question requires experts to project past experience and current knowledge onto a forthcoming period of time. But does this question have a history of its own? Did earlier news workers prognosticate as often and with the same urgency? What anxieties or opportunities provoked past future thought? To answer these questions, I explore some future-oriented predictions, assessments, and directives of nineteenth- and twentieth-century reporters, editors, and media entrepreneurs in the United States and England. Their claims about the future of journalism serve as windows into the relationship between technology and news work at different historical moments and offer insights into today’s prognoses.

    The Current Crisis

    In the U.S., mainstream news agencies have been dealt a series of technological, economic, and political blows that have changed the way news is written, distributed, consumed, funded, and understood. Anxiety about the future can be understood in light of three interrelated challenges to the post-World War II information order: twenty years of digital technological disruption, the 2008 economic crisis, and politically and economically motivated challenges to the industrial news media.

By now it is a truism that screen-based digital technologies have transformed journalism. Newspapers, in particular, have experienced an advertising and readership decline more existentially threatening than the threat posed to print from radio in the 1920s or from television in the 1950s. The net presented a challenge to print media even before it became a major platform for news; in the mid-1990s, Craigslist disrupted the long-standing classified ad revenue streams of daily newspapers (Seamans and Zhu 2013). The incorporation of print news functions into the digital has only intensified since then. Internet saturation in U.S. households is at 84 percent and climbing (Pew Research Center 2015). News consumers are no longer tethered to a small set of news organizations; sixty-two percent read disparate stories they happen across on social media and Twitter feeds and do not subscribe to a single newspaper or news magazine (Gottfried and Shearer 2016).

Newspapers were already on shaky ground when the 2008 financial crisis struck. Economic downturn coupled with technological displacement led to a crisis of near Darwinian proportions for an industry that had seen outsized profit margins for much of the twentieth century. Closures, bankruptcies, and mergers ensued. Historic papers like the Rocky Mountain News and Ann Arbor News shut their doors, and many other dailies and weeklies switched to web-only formats (Rogers 2009). Over a hundred papers ceased publication between 2004 and 2016 (Barthel 2016). Papers that endured the techno-economic struggles of the 2000s had to rethink the nature of the news enterprise from the ground up, devising survival strategies in a new, Mad Max-style media terrain depleted of advertisers and subscribers.

    Journalism never regained its footing after the financial crisis. As a Pew Research Center study suggests, “2015 might as well have been a recession year” for the traditional news media (Barthel 2016). The study paints a grim picture of the news industry. In 2014 and 2015, the number of print media consumers continued to drop. Even revenue from digital ads fell as advertisers migrated to social media sites like Facebook. And full-time jobs in journalism continued their steady decline: today there are 39 percent fewer positions than there were two decades ago. News consumption also began to shift from personal computers to mobile devices. Readers increasingly access news items on their phones, while standing in line, waiting at red lights, and at other spare moments of the day. In a metric-driven world, mobile news consumption has a silver lining: many sites are receiving more visits than before. However, the average mobile-device reader spends less time with each article than they did on PCs (Barthel 2016). Demand for news exists, albeit in ever-smaller and dislocated chunks.

At the same time, insurgent news entrepreneurs have altered the media field by leveraging weaknesses in the system and taking advantage of emerging technological possibilities. Just as the most successful nineteenth-century “startups” were enabled by new technologies like the steam press that sped up and lowered the cost of printing,[1] today’s media insurgents – people like Matt Drudge, Steve Bannon, the late Andrew Breitbart, and others – moved straight to digital news and data formats without prior institutional baggage. Since initial start-up costs on the Web are low and news production and dissemination are relatively easy, they were able to offer a trimmed-down model of news production that did not require reporting in the strict sense.

    Some of these insurgents imagine a future for news unfettered by past or existing structures. They claim they want to take a sledgehammer to old media, but it really serves as their foil. In the current context, the terms old media, establishment media, and mainstream media are thrown around by new media players jockeying for position in a changing media field. The White House is currently engaged in a hostile yet mutually beneficial battle with mainstream news outlets, and it echoes the position that the news media is a liberal monolith that censors alternative positions.[2] At the same time, establishment journalism is enjoying a period of unpredicted growth due to the Trump bubble, and has been reinventing and reimagining itself as the Fourth Estate in the wake of the 2016 election.

Future-of-news experts reduce professional and public uncertainty in times of flux (Lowrey and Shan 2016). But it is important to note that not all contemporary observers are worried. The late David Carr, for instance, believed Web startups like Buzzfeed would eventually become more like traditional news outlets. “The first thing they do when they get a little money is hire some journalists,” he said in 2014. He was confident news audiences had an intrinsic desire for quality and that the business end of things would eventually sort itself out.

By contrast, people who express anxieties about the state of journalism are more likely to have experienced journalism as a stable and predictable field, and to have lost something when the old model collapsed. Those who are concerned worry that a digital-age business model will never arise to solve journalism’s funding problem. They worry that automation will replace journalists. They fear ideological bubbles and distracted audiences. They lament eroding legitimacy and credibility in an era of so-called fake news. And they hope prognosticators possess special knowledge or have more crystalline vision than others in the profession. But did past reporters and editors worry about the fate of their profession in the same way?

    The Nineteenth Century

    In the nineteenth century, journalism was a wide-open, experimental field on both sides of the Atlantic. Literacy rates were climbing. Print technologies had improved. Paper was cheaper to produce than ever before. Newspapers, book publishers, and the public were experiencing the power of mass dissemination. By the second half of the nineteenth century, newspapers’ social standing had improved. Some observers believed they were institutions on the ascent that would eventually play a social role on par with educators, clergy, or government officials.

    However, concerns about the accelerated pace of newspaper work, the constant demand for “newness,” and the unremitting imperative to scoop rival papers were refrains in nineteenth-century journalistic commentary. In his biography of Henry Raymond, the journalist and author Augustus Maverick characterized news work in 1840s New York as an unceasing “treadmill”:

    Only those who have been placed upon the treadmill of a daily newspaper in New York know the severity of the strain it imposes on the mental and physical powers. ‘There is no cessation,’ one newsman explained. ‘A good newspaper never publishes that which is technically denominated ‘old news,’ – a phrase so significant in journalism as to be invested with untold horrors. All must be daily fresh, daily complete, daily polished and perfect; else the journal falls into disrepute, is distanced by its rivals, and, becoming ‘dull,’ dies. (1870, 220)

    I will return to the issue of acceleration later in the paper. For now, it is important to note that perceptions of speedup and fears of being outmoded were embedded in the experience of journalism as early as the 1840s.

    Despite journalism’s daily stresses, Maverick felt the quality and legitimacy of papers was on the rise. The press had successfully overcome early-nineteenth century threats to credibility like partisanship and the sensationalism of the penny press, which printed fantastical, fabricated stories like the New York Sun’s Great Moon Hoax. Maverick believed this progress would continue unabated:

    Accepting the promise of the Present, the prospect of the Future brightens. For, as men come to know each other better, through the rapid annihilation of time and space, they will be plunged deeper into affairs of trade and finance and commerce, and be burdened with a thousand cares, – and the Press, as the reflector of the popular mind, will then take a broader view, and reach forth towards a higher aim; becoming, even more than now, the living photograph of the time, the sympathetic adviser, the conservator, regulator, and guide of American society. (1870, 358)

    Maverick envisioned a future in which the press would both facilitate and temper the social changes wrought by connectivity (changes that he analyzed in his 1858 book on the telegraph).

The same year Maverick predicted a role for the press as guide and advisor in an increasingly complex and interconnected world, William T. Stead began his career as a fledgling reporter. Few journalists tested, challenged, and wielded the power of the press quite like Stead. In his essay “The Future of Journalism” (1887), he envisioned radical and expansive new plans for the press. His own journalistic experiments had convinced him that editors “could become the most permanently influential Englishmen in the Empire.” But to ascend to this level one had to become a “master of the facts – especially the most dominant fact of all, the state of public opinion.” Editors guessed at public opinion, but had no way of gauging it. To remedy this, Stead suggested journalists be allowed twenty-four-hour access to everyone “from the Queen downward.” His news workers of the future would be intimately connected to public opinion across the social system. They would have unfettered access to powerful people, which would diminish the unquestioned authority and privacy of the aristocracy.

    Since the system Stead imagined would be impossible for one person to manage, it would be held in place by travelers who would preach the importance of journalistic work with a missionary zeal. The travelers would eventually be “entrusted the further and more delicate duty of collecting the opinions of those who form the public opinion of their locality.” Stead was certain the enactment of his plan would result in the greatest “spiritual and educational and governing agency which England has yet seen.”

    “The Future of Journalism” demonstrates a keen awareness of print’s power in an era of mass distribution and rapid news diffusion. It was grandiose because it imagined a far greater political role for journalists than they would ever possess. In some respects, though, Stead was a superior prognosticator. In 1887, the communications field was undifferentiated. His journalistic travelers and major-generals would ultimately manifest themselves in the twentieth century as pollsters, social scientists, and public relations specialists. But the editor would not sit at the helm, overseeing these efforts. Instead, journalist/editors would report their findings and beliefs, and serve as conduits in the flow of ideas between these professionals and the public. Despite their inadequacies, Stead’s writings on the future were more prescriptive and imaginative than many of today’s commentaries on the topic.

    Twentieth-Century Futures

Nineteenth-century commentators on the news profession lamented acceleration, railed against partisanship, and decried certain forms of sensationalism, but they also believed in progress. This changed in the twentieth century. Frank Munsey began his career selling low-cost magazines and pulp fiction. In 1889 he launched the popular general-interest magazine Munsey’s Magazine, and he went on to amass a fortune between 1900 and 1920 purchasing and selling ten different newspapers, including The New York Daily News, The Boston Journal, and The Washington Times. He was a businessman first and journalist second. Munsey’s contemporaries viewed him as journalism’s undertaker: his very appearance on the scene heralded a newspaper’s demise. Oswald Garrison Villard, for instance, described him as “a dealer in dailies – little else and little more” (1923, 81).[3]

Munsey’s “Journalism of the Future” appeared in 1903 in Munsey’s Magazine. In it, he suggests that editors’ common refrain about a “lack of good men” misses the real problem. The threat facing journalism is not a lack of well-trained workers, but the size of daily papers. Newspapers, which had been expanding since the 1890s, contained more sections, lengthier features, and larger Sunday editions than ever before. As papers grew, readers became rushed. The problem with news circa 1903 was that there was too much to write about and too much to read. Because they had to absorb so much, readers’ attention was at an all-time low (a concern that resonates with today’s news producers). For Munsey, the solution to the problem of the rushed and inattentive reader lay in condensation and conglomeration. Predicting extreme media consolidation long before it occurred, Munsey speculated that within four years (i.e., by 1907) the entire media field would be whittled down to three or four firms that would publish every newspaper, periodical, magazine, and book:

    The journalism of the future will be of a higher order than the journalism of the past or the present. Existing conditions of competition and waste, under individual ownership, make the ideal newspaper impossible. But with a central ownership big enough and strong enough to encompass the whole country, our newspapers can afford to be independent, fearless, and honest. (1903, 830)

    For Munsey, consolidation, quality, and independence are linked through the efficiency and scope of large-scale production and the nationalization of mass audiences. He does not foresee problems caused by monopolization or threats to newspapers from radio. He imagines technology only as it relates to its effects on the productive capacity of print news, which he thought was fettered by local ownership.

    Writing during World War I, Willard Grosvenor Bleyer, founder of the University of Wisconsin journalism school and advocate of professional training, took a more modest view of journalism’s future. His primary concern was wartime press censorship and the spread of propaganda through semi-official news agencies. However, he considered these developments temporary deviations from the normal function of the press in a democratic society: eventually the profession would return to its pre-war normalcy. “The world war,” he wrote, “has given rise to peculiar problems, none of which, however, seems likely to have permanent effects on our newspapers” (1918, 14). Wartime austerity, especially the high price of paper, posed problems for the news industry. But there was a bright side. People wanted news from Europe, so the higher cost of newspapers had not decreased circulation rates.

Some early-twentieth-century observers were concerned about sensationalism and editorial independence or the effects of war on the press, while others worried about the future of democracy in the context of Munsey-wrought newspaper industry mergers. Oswald Villard, writer for The Nation and The New York Evening Post, founder of the American Anti-Imperialist League, and the first treasurer of the National Association for the Advancement of Colored People, argued that consolidation threatened democracy. Most newspapers lacked commercial independence and were beholden to advertisers who limited what they could publish. He was also concerned about the political implications of audience fragmentation: “Not today can one, no matter how trenchant their pen, be in a garret and expect to reach the conscience of a public by seventy millions larger than the America of Garrison and Lincoln.” Villard, however, held out hope that the views of ‘great men’ would find an audience, even if it meant bypassing the press. He did not predict new media forms, but looked back at old ones: “the prophet of the future will make his message heard, if not by a daily, then by a weekly; if not by a weekly, then by pamphleteering in the manner of Alexander Hamilton; if not by pamphleteering then by speech in the market-place” (1923, 315).

After World War II, journalism experienced a period of stability that gave it an aura of permanence, as if media institutions were constants amidst other economic, social, and cultural changes. Future concerns during this period centered on issues of technology and media consolidation. In 1947, for example, the Hutchins Commission on Freedom of the Press predicted that newspapers would soon be sent from FM radio stations to personal facsimile machines. These devices would print, fold, and deposit them in the hands of U.S. householders each morning (34-45). News workers and industry analysts predicted that technologies as diverse as citizens band radio, cable TV, camcorders, and CD-ROMs would, for better or worse, alter the production or consumption of news and either enhance or impede democratic processes (Curran 2010). In the 1980s and 90s, journalists and media critics pointed to the pernicious effects of monopolization in national and regional markets. They feared the one-newspaper town and the absorption of local newspapers by media franchises. Michael Kinsley recalls that, in the pre-Internet period, “at symposia and seminars on the Future of Newspapers, professional worriers used to worry that these monopoly or near-monopoly newspapers were too powerful for society’s good” (2014).

    Time, Space, and Journalism

Time is not a natural resource that springs from the Earth, but a cultural and social construct imagined and experienced in multiple ways (Fabian 1983).[4] Some social theorists argue that the sensation of rapid acceleration is a key feature of the modern experience of time (Crary 2013; Rosa 2013). Hartmut Rosa, for example, has argued that time compression has reached a point where the hamster wheel or treadmill has become an apt metaphor for modern life. Work speedups and technological immersion are necessary just to maintain social stasis, without the possibility of advancement or breaking free (Rosa 2010). For Rosa and other theorists of acceleration, speedup leaves you mired in the present, anticipating the future with a sense of dread. The reality is that there is no uniform experience of time; our experience depends upon our position within circuits of information and capital (Sharma 2014). But when it comes to technological and economic speedup, journalism may be the canary in the mine. Reporters like Maverick experienced this treadmill effect as early as the 1840s. In 1918, Francis Leupp described the quickening pace of news work in the electric age:

    We must reckon with the progressive acceleration of the pace of our 20th century life generally. Where we walked in the old times we run in these; where we ambled then, we gallop now. In the age of electric power, high explosives, articulated steel frames, in the larger world; of the long-distance telephone, the taxicab, and the card-index, in the narrower. The problem of existence is reduced to terms of time-measurement. (39)

    Like Maverick, Leupp experienced the dynamism of modern life and the dual pressures of accuracy and speed in journalism.

It makes sense that journalism would experience the present this way. As the quintessential modern form, news embodies planned obsolescence (Schwartz 1999). Journalism has undergone two centuries of shrinking intervals of newness and relevance: six months, a week, a day, an hour. With the rise of social media and Twitter, the intervals between news cycles have grown even shorter. In the twentieth century, edition release times and broadcast schedules helped carve the day into identifiable units with firm deadlines. But in a context where news can be posted around the clock and updated every minute, the clock is no longer a structuring device for journalism. Minutes, seconds, and the calendar click-over from one day to the next are the only salient units of time. News stories that were relevant and new last week often seem ancient a week later. A newsworthy event like President Trump pulling out of the Paris climate agreement can feel as distant as the Vietnam War the following week. New communication forms like Twitter coupled with strategies of disinformation and the routinization of scandal shatter perceptions of continuity. What we are experiencing now is not the death of history, as was proclaimed after the fall of the Berlin Wall, but the death of the present. In news, rapid acceleration has amnestic effects, similar to the experience of sleep deprivation.

If the main time/space vectors in journalism used to be deadlines and beats, the latter may also be losing their importance, giving way to a more fluid cut-and-run style of journalism. For example, the Washington Post’s Chris Cillizza suggests that young reporters should not decline stories saying, “that’s not my beat” (2016). Rather, in a context of dwindling opportunities, journalists should pursue any story available, whether or not it fits into the old-fashioned logic of beat work or the range of competence of individual journalists.[5] But while traditional beats may be losing their cogency, reporters must add a new online “beat” to their repertoire that entails close surveillance of social media and online news, a dynamic that some critics have argued creates a house of mirrors effect in the news industry (Reinemann and Baugut 2014).

    Technology and Uncertainty in the Professions

    Journalism may be the paradigmatic case of a profession imperiled by a new technology, but its concerns about time and technological displacement cannot be generalized to other spheres. Take lawyers, social workers, and physicians. Uncertainty within the legal profession is largely unrelated to the digital. It was caused by the recent financial crisis coupled with the overtraining of new professionals. Jobs for newly minted JDs evaporated during the recession, leading to a decline in the number of law school applicants after 2010. With enrollment down, the future of smaller law schools became uncertain, and many schools lowered admission standards to stay afloat (Olson 2015; Pistone and Horn 2016). The profession has been in crisis, but not because of the Internet, and there is even some evidence that law positions are coming back (Solomon 2015). Uncertainty for social workers began even earlier, when the Clinton administration began dismantling the welfare state. Despite the obvious need for such professionals, government, non-profit, and other social service jobs have seen a quarter-century decline because of deep budgetary cuts that began in the 1990s (Reisch 2013).

Physicians seem least concerned with the future. They worry more about burnout than they do about the fate of their profession. The future is typically invoked in discussions about labor shortages and descriptions of new developments at the intersection of medicine and technology. Articles on the future of medicine routinely tout new developments like 3D printers that can form living cells into new organs (Mellgard 2015). Digitalization has changed many aspects of medicine: electronic medical records and charting alter the way nurses and physicians access information, for instance. But it has not led to credible speculation about replacing physicians with bots. Contrast this with some news workers’ worries about replacement by computer programs like Automated Insights’ narrative generation system, Wordsmith. The Associated Press now employs Wordsmith to do its quarterly earnings reports and other stories, and has become so confident in these auto-generated stories that it runs many of them without prior vetting (the rare human-edited AI story is said to have had “the human touch”) (Miller 2015). Nor have drones been proposed as a viable alternative to human physicians, as they have been for newsgatherer/photojournalists (Etzler 2016).[6]

In none of these other cases is technology the primary motor of destabilization. The character of future angst in the professions, therefore, is occupation dependent. And journalism, it seems, is uniquely sensitive and vulnerable to technology. Every widely adopted communications technology – the steam press, radio, the net – has restructured news and led to audience expansion or contraction. In this sense, there is nothing new to journalists’ dependence on and transformation by technologies. The one constant is that journalists work in a field of technological contingency.

    Conclusion: Euphoria and Dysphoria in Journalism

Visions of the future are also statements about the present. Political and economic conditions, labor concerns, and beliefs about the nature of time are contained within predictive thought. The future-of-journalism question has been asked both when a number of possibilities are on the table and when fewer options are imaginable. Sometimes predictions are made when a journalist has a stake in seeing a particular vision enacted. There was no social stasis or treadmill for Munsey, who saw conglomeration as the key to good journalism, or for Stead, who imagined himself as the heroic journalist proselytizer. Both saw themselves as leaders of the free world. Feelings of euphoria and dysphoria, therefore, come and go and are not unique to one era. Nineteenth-century journalists like Stead and Maverick imagined their field’s future and the journalist’s future roles in society. Both were “feeling it,” riding high on the wave of mechanization.

William T. Stead, 1909 (image sources: https://it.wikipedia.org/wiki/William_Thomas_Stead, https://giphy.com/gifs/3XH3YqPpfwmPMxx5Xr)

Social roles are also embedded within occupational visions of the future. Will tomorrow’s journalists be tellers of truth, interpreters of data, shapers of public opinion, informers of policy makers, imaginers of social utopias? Some commentators insist that news must change to remain relevant in the digital age. In a world of abundant facts, reporters should be master interpreters, explaining the “what” and “how” to the public rather than reciting basic information (Cillizza 2016; Stephens 2014). As older models of journalism become outmoded, either by the Web or by computer programs, the hope is that professional journalists will find a niche explaining events. A similar impulse lies behind data-driven journalism, but in this case the journalists refashion themselves as computer workers, scraping the Web for reams of data, interpreting it, and presenting it to audiences in visually and narratively compelling ways. In solutions-based journalism, the reporter is a meta-social worker or public policy specialist, proposing potential solutions to local social problems based on what other locales have found successful.

There is also an emerging patronage system in which billionaires, foundations, and small donations prop up capital-intensive journalistic forms like investigative journalism. This is a good stopgap measure, and much of the work that has been supported by tech moguls like Jeff Bezos, Pierre Omidyar, and others has typically been of high quality. But it raises the question: can journalists today write exposés about the very people and tech companies sponsoring their journalism, the way Ida Tarbell wrote about Standard Oil?

The social roles future-of-news experts imagine might come to pass, but not always in the way they expect. Stead’s call for government by journalism, for instance, is certainly embodied in a figure like Breitbart’s Steve Bannon. Although Stead would disagree with his political vision and journalistic practices, Bannon is also “feeling it,” envisioning a future of infinite possibilities.

    Occupational forecasting serves both psychological and pragmatic ends: it reduces anxieties at the same time that it identifies trends to guide present-day action. Because the future is speculative and can only be imagined or modeled, not recreated from memory, artifact, or written record, prediction-based advice runs a high risk of misdirection. We can safely assume that prognosticators will not determine the actual future of journalism. If Stead were really clairvoyant, the Titanic would have been spared and journalism saved. As Robert Heilbroner suggests, prediction is an exercise in futility. It is better to “ask whether it is imaginable… to exercise effective control over the future-shaping forces of Today” (1995, 95). It is only in this sense that discussions of the future and the social experiments they generate do, in fact, transform the field.

    _____

    Gretchen Soderlund is Associate Professor of Media History in the University of Oregon’s School of Journalism and Communication. She is the author of Sex Trafficking, Scandal, and the Transformation of Journalism, 1885-1917 (University of Chicago Press) and editor of Charting, Tracking, and Mapping: New Technologies, Labor, and Surveillance, a special issue of Social Semiotics. Her articles have appeared in such journals as American Quarterly, Feminist Formations, The Communication Review, Humanity, and Critical Studies in Media Communication.


    _____

    Acknowledgments

    The author would like to thank Patrick Jones for his comments on an earlier draft of this essay.

    _____

    Notes

    [1] The tremendous success of nineteenth-century self-made owner-editors like Benjamin Day or S.S. McClure can be attributed to innovations in content and funding models. In the 1830s, Day lowered the cost of his newspaper to only a penny, making it affordable to more New Yorkers, and made up for the decreased revenue by selling more advertising space. McClure did the same thing for magazines in the 1890s, selling his publication for a nickel instead of the standard quarter while increasing ad revenue. In doing so, both took advantage of untapped opportunities to reshape the news field in their respective eras.

[2] Before the 2016 election, this rhetoric united the libertarian left and the right. In a 2014 interview on Democracy Now that, not coincidentally, got positive play in the rightwing media, Glenn Greenwald lambasted Washington Post editors as “old-style, old-media, pro-government journalists… the kind who have essentially made journalism in the U.S. neutered and impotent and obsolete” (Watson 2014).

    [3] Villard also said of Munsey: “There is not a drop of the reformer’s blood in him; there is in him nothing that cries out in pain in response to the travails of multitudes” (1923, 72).

    [4] The representational features of future thought are also culturally and historically specific (Rosenberg and Harding 2005).

[5] This more mobile, targeted approach to news production with fewer fixed duties or beats may offer a more varied work experience. But it has labor implications as well: it edges toward freelancing, and without beats it may be difficult to say no to assignments. Further, reporters may find themselves in over their heads when reporting on topics around which they can claim no expertise.

    [6] Indeed, the FAA changed its policy on August 29, 2016 so that journalists do not need pilot’s licenses to fly drones, which will precipitate the increased use of the tool in the future (Etzler 2016).

    _____

    Works Cited

    • Barthel, Michael. 2016. “Newspapers: Fact Sheet.” Pew Research Center (Jun 15).
    • Carr, David. 2014. “NYT’s David Carr on the Future of Journalism.” YouTube interview.
    • Cillizza, Chris. 2016. “The Future of Journalism Is Saying ‘Yes.’ A Lot.” Washington Post (May 23).
    • Crary, Jonathan. 2013. 24/7: Late Capitalism and the Ends of Sleep. Brooklyn, NY: Verso.
    • Curran, James. 2010. “Technology Foretold.” In Natalie Fenton, ed., New Media, Old News: Journalism and Democracy in the Digital Age. London: Sage.
    • Etzler, Allen. 2016. “Exploring the Use of Drones in Journalism.” News Media Alliance (Sep).
    • Fabian, Johannes. 1983. Time and the Other: How Anthropology Makes Its Object. New York: Columbia University Press.
    • Friedhoff, Stefanie. 2015. “David Carr on Teaching and the Future of Journalism.” Boston Globe (Feb 13).
    • Gottfried, Jeffrey and Elisa Shearer. 2016. “News Use Across Social Media Platforms 2016.” Pew Research Center (May 26).
    • Heilbroner, Robert. 1995. Visions of the Future: The Distant Past, Yesterday, Today, Tomorrow. Oxford: Oxford University Press.
    • Kinsley, Michael. 2014. “The Front Page 2.0.” Vanity Fair (Apr 10).
    • Lowrey, Wilson and Zhou Shan. 2016. “Journalism’s Fortune Tellers: Constructing the Future of News.” Journalism. 1-17.
    • Maverick, Augustus. 1870. Henry J. Raymond and the New York Press for Thirty Years: Progress of American Journalism from 1840 to 1870. Hartford, CT: A.S. Hale and Company.
    • McCaskill, Nolan. 2017. “Trump Backs Bannon: ‘The Media is the Opposition Party.’” Politico (Jan 27).
    • Mellgard, Peter. 2015. “Medical 3D Printing Will ‘Enable a New Kind of Future.’” The World Post (Jun 22).
    • Miller, Ross. 2015. “AP’s ‘Robot Journalists’ are Writing their own Stories Now.” The Verge (Jan 29).
    • Munsey, Frank. 1903. “Journalism of the Future.” Munsey’s Magazine 28. 823-830.
    • Olson, Elizabeth. 2015. “Study Cites Lower Standards in Law School Admissions.” The New York Times (Oct 26).
    • Patel, Neel V. 2015. “Dronalism is the Future of Journalism: The End of Privacy Cuts Both Ways.” Inverse (Sep).
    • Pew Research Center. 2015. “Americans’ Internet Access: Percent of Adults 2000-2015.”
    • Pistone, Michele and Michael Horn. 2016. Disrupting Law School: How Disruptive Innovation will Revolutionize the Legal World. Clayton Christensen Institute.
    • Reinemann, Carsten and Philip Baugut. 2014. “German Political Journalism Between Change and Stability.” In Raymond Kuhn and Rasmus Kleis Nielsen, eds., Political Journalism in Transition: Western Europe in a Comparative Perspective. New York: Palgrave Macmillan.
    • Reisch, Michael. 2013. “Social Work Education and the Neo-liberal Challenge: The U.S. Response to Increasing Global Inequality.” Social Work Education 32. 715-733.
    • Rescher, Nicholas. 1998. Predicting the Future: An Introduction to the Theory of Forecasting. Albany, NY: State University of New York Press.
    • Rogers, Tony. 2009. “A Timeline of Newspaper Closings and Calamities.” About.com.
    • Rosa, Hartmut. 2010. “Full Speed Burnout? From the Pleasures of the Motorcycle to the Bleakness of the Treadmill: The Dual Face of Social Acceleration.” International Journal of Motorcycle Studies 6:1.
    • Rosa, Hartmut. 2013. Social Acceleration: A New Theory of Modernity. New York: Columbia University Press.
    • Rosenberg, Daniel and Sandra Harding. 2005. “Introduction: Histories of the Future.” In Daniel Rosenberg and Sandra Harding, eds., Histories of the Future. Durham, NC: Duke University Press.
    • Seamans, Robert & Feng Zhu. 2013. “Responses to Entry in Multi-Sided Markets: The Impact of Craigslist on Local Newspapers.” Management Science 60. 476-493.
    • Sharma, Sarah. 2014. In the Meantime: Temporality and Cultural Politics. Durham, NC: Duke University Press.
    • Schwartz, Vanessa. 1999. Spectacular Realities: Early Mass Culture in Fin-de-Siécle Paris. Oakland, CA: University of California Press.
    • Solomon, Steven Davidoff. 2015. “Law Schools and Industry Show Signs of Life Despite Forecasts of Doom.” The New York Times (Mar 31).
    • Stead, William. 1887. “The Future of Journalism.” Contemporary Review 50. 664-679.
    • Stead, William. 1893. “From Old World to New: or, A Christmas Story of the Chicago Exhibition.” Review of Reviews.
    • Stephens, Mitchell. 2014. Beyond News: The Future of Journalism. New York: Columbia University Press.
    • Villard, Oswald Garrison. 1923. Some Newspapers and Newspaper-Men. New York: Alfred A. Knopf.
    • Watson, Steve. 2014. “Greenwald Slams ‘Neutered And Impotent and Obsolete Media.’Infowars.
  • Annemarie Perez — UndocuDreamers: Public Writing and the Digital Turn

    Annemarie Perez — UndocuDreamers: Public Writing and the Digital Turn

    Annemarie Perez [*]

    The supposition [is] that higher education and schooling in general serve a democratic society by nourishing hearty citizenship.
    – Richard Ohmann (2003)

What are the risks of writing in public in this digital age? Of being a “speaking” subject in the world of public cyberspace? Physical and legal risks are discussed in work such as Nancy Welch’s (2005) recounting of her student’s encounter with the police for literally posting her poems where bills or poems were not meant to be posted. Weisser recounts a “hallway conversation” about public writing as “shared work, shared successes, and, occasionally, shared commiseration” (2002, xii). Likewise, in writing about blogging in the classroom, Charles Tryon writes about the way blogging with interactions from the public provokes “conversations” about the “relationship between writing and audience,” one that can, at times, be uncomfortable (2006, 130). There is an assumption that when discussing the “risks” of writing in public here in the United States, we instructors are discussing the risks of exercising the rights of citizenship, of first amendment disagreement and discord. Yet the assumption that the speaking subject has first amendment rights, that they possess or can express citizenship, is one which nullifies the risks some students face when they write in public, especially in digital spaces where the audience can be a vast everyone. What is the position of one who writes in public literally without the possibility of citizenship? In the absence of US citizenship, taking the position of subject, offering testimony about one’s situation, and protesting it as unjust can provoke not simply abuse, which is disturbing enough, but threats of legal action. Public writing opens these students and their families up to threats of reporting, detainment, and possible deportation by the government. Given these very real risks, I question whether, from the standpoint of Chicanx studies pedagogy, we should be advocating for and instructing our students to express their thoughts on their positions, on their lives, in public.[1] This question feels especially urgent when, given the digital turn, writing in “public” can mean that a single tweet results in huge consequences, from public humiliation to the horror of doxxing. To paraphrase Eileen Medeiros, who writes about these risks in another context, “was it all worth a few articles and essays” or, to make it more contemporary, is the risk worth a few blog posts or ‘zines? (2007, 4).

This said, I was and am convinced of the power and efficacy of having students write in public, especially in Chicanx studies classrooms. Faced with the possibilities the Internet offers the Chicanx studies classroom, I have responded with enthusiasm for electronic writing, for making our discourse public. Chicanx pedagogy is, in part, based on a repudiation of top-down instruction; public writing, as a pedagogy, instead advocates bringing the community into the classroom and the classroom into the community. Blogging is an effective way to do this. Especially given the relative lack of Chicanx digital voices on the ‘net, I yearn for my students to own part of the Internet, to be seen and heard. This enthusiasm for having my Chicanx studies students write for the Internet came first out of my final year of dissertation research, when I “discovered” that, online, terms from the Chicano Movement like “Aztlán” and “La Raza” were being used by reactionary racists to (re)define and revise the history of the Chicano Movement as racist and anti-Semitic, wildly distorting the goals, philosophies, and accomplishments of revolutionary movements. More disturbing, these mis-definitions were well enough linked to appear on the first few pages of search results, inflating their importance and giving them a sense of being “truth” merely by virtue of being oft repeated. My students’ writings, my thinking went, would change this. Their work, I imagined, would be interventions in the false discourse, changing, via Google hits, what people would find when they entered Chicanx studies terms into their browsers. Besides, I did my undergraduate degree at a midwestern university without a Chicanx or Latinx studies department; my independent study classes in Chicanx literature were constructed from syllabi for courses I found online. I was, therefore, imagining our public writing being used by people without access to a Chicanx studies classroom to further their own educations.

Public writing, generally defined as writing for an audience beyond the professor and classroom, can be figured in a variety of ways, but descriptions, especially those in the form of learning objectives and outcomes, tend to focus on writing centered on social change and the fostering of citizenship. This concept of “citizenship” is often repeated in composition studies, where public writing is discussed as advocacy, as service, as an expression of active citizenship. Indeed, theorists have figured public writing as an expression of “citizenship” and as an exercise in and demonstration of first amendment rights. Public writing is presented, in Christian Weisser’s words, as the “discourse of public life”; Weisser further writes of his pride in being “a citizen in a self-reforming constitutional democracy” (2002, xiv). Public writing is presented as nurturing citizenship, and therefore we are encouraged to foster it in our classrooms, especially in the teaching of writing. Weisser also writes of the teaching of public writing as a “shared” classroom experience, sometimes including hardships, between students and instructors.

However, this discussion of “citizenship” and the idea of creating it through teaching rather disturbingly echo the idea of assimilation to the dominant culture, an idea that Chicana/o studies pedagogy resists (Perez 1993, 276). Rather than the somewhat nationalistic goal of creating and fostering “citizenship,” the goal of Chicana/o studies, especially since the 1980s publication and adoption of Gloria Anzaldúa’s Borderlands, has been a discourse that “explains the social conditions of subjects with hybrid identities” (Elenes 1997, 359). These hybrid identities, and the assumption of the position of subjecthood by those who resist the idea of nation, are fraught, especially when combined with public writing. As Anzaldúa writes, “[w]ild tongues can’t be tamed, they can only be cut out” (1987, 76). The responses to Chicanx and Latinx students speaking or writing their truth can be demands for their silence.

My students’ and my use of public writing via blogging and Twitter was productive in the upper-division classes I taught on Latina coming-of-age stories, Chicana feminisms, and Chicana/o gothic literature. After four courses taught using blogs created on my WordPress multisite installation, with author accounts created for each student, I felt that I had the blogging with students / writing in public / student archiving thing down. My students had always had the option to write pseudonymously, but most had chosen to write under their own names, wanting to create a public digital identity. The blogs were on my domain and identified with their specific university and course. We had been contacted by authors (and, in one case, an author’s agent), filmmakers, and artists, and other bloggers had linked to our work. My students and I could see we had a small but steady traffic of people accessing student writing, with their work being read and seen, and, on a few topics, our class pages were on the first pages of a Google search. Therefore, when I was scheduled to teach a broader “Introduction to Chicana/o Studies” course, I decided to apply the same structure to this one-hundred-level survey course: students publicly blogging their writing on issues related to Chicanx studies on a common course blog. Although, in keeping with my specialization, the course was humanities-heavy, with a focus on history, literature, and visual art, the syllabus also included a significant amount of work in social science, especially sociology and political science, forming the foundations of Chicanx studies theory. The course engaged a number of themes related to Chicanx social and political identity, focusing a significant amount of work on communities and migrations. The demographics of the course were mixed: in the thirty-student class, about half identified as Latina/o. The rest were largely white American, with several European international students.

As we read about migrations, studying and discussing the politics behind both the immigrant rights May Day marches in Los Angeles and the generations of migrations back and forth across the border, movements of people which long pre-dated the border wall, we also discussed the contemporary protest writings and art creations of the UndocuQueer Movement. In the course of class discussion, sometimes in response to comments their classmates were making that left them feeling that undocumented people were being stereotyped, several students self-disclosed that they were either the children of undocumented migrants or were undocumented themselves. These students discussed their experience of not being citizens of the country they had lived in since young childhood, the fear of deportation they felt for themselves or their parents, and its effect on them. The students also spoke of their hopes for a future in which they, and/or their families, could apply for and receive legal status, become citizens. This self-disclosure and recounting of personal stories had, as had been my experience in previous courses, a significant effect on the other students in the class, especially those who had never considered the privileges their legal status afforded them. In the process the undocumented students became witnesses and experts, giving testimony. They made it clear they felt empowered by giving voice to their experience and seeing that their disclosures changed the minds of some of their classmates about who was undocumented and what they looked like.

After seeing the effect the testimonies had in changing the attitudes of their classmates, my undocumented students, especially one who strongly identified with the UndocuQueer movement and had already participated in demonstrations, began to blog about their experiences, taking issue with stereotypes of migrants and discussing the pain that reading or hearing a term like “illegals” could cause. Drawing on the course-assigned examples of the writers Anzaldúa and Cherríe Moraga, they used their experiences, their physical bodies, as both evidence and metaphor of the undocumented state of being in-between, not belonging fully to any country or nation. They also wrote of their feelings of invisibility on a majority white campus where equal rights of citizenship were assumed and taken for granted. Their writing was raw and powerful, challenging, passionate and, at times, angry. These student blog posts seemed the classic case of students finding their voices. I was pleased with their work and gave them positive feedback, as did many of their classmates. Yet as their instructor, I was focused on the pedagogy and their learning outcomes relative to the course and had not fully considered the risk they were taking by writing their truth in public.

As part of being instructor and WordPress administrator, I was also moderating comments to the blog. The settings had the blog open to public comments, with the first comment from any email address being hand-moderated in order to prevent spamming. For the most part, however, unless an author we were reading had been alerted via Twitter, comments were between and among students in the course, which gave the course blog the feeling of being an extension of the classroom community, an illusion of privacy and intimacy. Due to this closeness, the fact that the posts and comments were all coming from class members, the students and I largely lost sight of the fact that we were writing in public, as the space came to feel private. This illusion of privacy was shattered when I got a comment for moderation from what turned out to be a troll demanding “illegals” be deported. Although it was not posted, what I read was an attack on one of my students, hinting that the poster had done (or would do) enough digging to identify the student and their family. Not only was the comment abusive; the commenter also claimed to have reported my student to ICE.

I was reminded of this comment, and of the violent anger directed at undocumented students however worthy they might try to prove themselves, again in June 2016, when Mayte Lara Ibarra, an honors high school student in Texas, tweeted her celebration of her status as valedictorian, her high GPA, her honors at graduation, her scholarship to the University of Texas, and her undocumented status. While she received many messages of support, she felt forced to remove her tweet due to abuse and threats of deportation by users who claimed to have reported her and her family to Immigration and Customs Enforcement (ICE).

When I received this comment for moderation, my first response was to go through and change the status of the blog posts testifying about being undocumented to “drafts,” and then to contact the students who had self-disclosed their status to let them know about the comment and the threat. I feared for my students and their families. Had I encouraged behavior–public writing–that made them vulnerable? I wondered whether I should go to my chair for advice. Guilty self-interest was also present. At the time I was an adjunct instructor at this university, hired semester-to-semester to teach individual classes. How would my chair, the department, the university feel about my having put my students at risk to write for a blog on my own domain? Suddenly the “walls” set up by Blackboard, the university’s learning management software, which I had dismissed for being “closed,” looked appealing as I wondered how to manage this threat. Much of the discourse around public writing for the classroom discusses “risk,” but whose risk are we talking about, how much of it can students take, and, as their instructor, what sort of risks can I be responsible for allowing them to take? Nancy Welch discusses the “attention toward working with students on public writing” as an expression of our belief as instructors that this writing “can matter in tangible ways” (2005, 474), but I questioned whether it could matter enough to be worth tangible risk to my students’ and their families’ physical bodies at the hands of a nation-state that has detained and deported more than 2.5 million people since 2009 (Gonzalez-Barrera and Krogstad 2014). While some of the students in this class qualified for Deferred Action for Childhood Arrivals (DACA), giving them something of a shield, their parents and other members of their families did not all have this protection.

By contrast, and perhaps not surprisingly, my students, all of them first- and second-year students, felt no risk, or at least were sure they were willing to take the risk associated with the public writing. They did not want their writing taken down or hidden. My students felt they were part of a movement, a moment, to expressly leave the shadows. One even argued that the abusive comment should be posted so they could engage with its author. We discussed the risks. Initially I wanted them to be able to make the choice themselves; I did not want to take their voice or power from them. Yet that was not true—what I wanted was for them to choose to take the writing down and absolve me of the responsibility for the danger in which my assignments had placed them. On the other hand, though, as I explained to them, the power and responsibility rested with me. I could not conscience putting them at risk on a domain I owned, for doing work I had assigned. They agreed, albeit reluctantly. What I find most shameful in this is that it was not their own risk, but their understanding of mine, of my position in the department and university, that made them agree I needed to take their writing down. We made their posts password-protected, shared the password with the other students for the duration of the class, and the course ended uneasily in our mutual discomfort. Nothing was comfortably resolved at this meeting of immigration law with my students’ bodies and their public writing. At the end of the course, after notifying the students so they could save their writing if they wished, I did something I had never done before: I removed the students’—all of the students’—blogging from the Internet by archiving the course blog and removing it from public view.

As I began to process and analyze what had happened, I wondered what could be done differently. Was there a way to allow my students to write in public yet somehow shield them from these risks? After I presented and discussed this experience at HASTAC in 2015, I was approached with a number of possible solutions, some of which would help. One, generously offered, was to host my next course blog on the HASTAC site, where commenting requires registration. This was a partial solution that would protect against trolling, but I questioned whether it could protect my students from identification, from them and their families being reported to the authorities. The answer was no, it could not.

In 2011, Amy Wan examined and problematized the idea of creating citizens and expressing citizenship through the teaching of literacy, a concept she traces through composition pedagogy, especially as it is expressed on syllabi and through learning objectives. The pedagogy of public writing is imbued with the assumption of citizenship, with the production of citizens as a goal of public writing. Taken this way, public writing becomes a duty. Yet there is a problem with this objective of producing citizens, and with this desire for citizenship, when it comes to students in our classes who lack legal citizenship. Anthropology in the 1990s tried to work around this and give dignity to those without “full” citizenship by presenting the idea of “cultural citizenship,” a way to refer to shared values of belonging among people without legal citizenship. This was done as a way of trying to de-marginalize the marginalized and reimagine citizenship so that no one’s status was second class (Rosaldo 1994, 402). But the situation of undocumented people belies this distinction, however noble and well rooted in social justice its intention. To be undocumented in the United States is to be dispossessed not only of the rights of citizenship, but also to have the exercise of either the rights or responsibilities of citizenship through public speaking or writing taken as incitement against the nation-state, with some citizens viewing it as a personal assault.

This problem of the exercise of rights being seen as incitement is demonstrated by the way the display of the Mexican flag at protests for immigrant rights is read as a rejection of the United States and a refusal of US citizenship, despite the protests themselves being demands or pleas for the creation of a citizenship path. The mere display of Mexico’s flag is read as a provocation, an action which, even when done by citizens, destabilizes citizenship, seems to remove protesters’ first amendment rights, and prompts cries that they should “Go back to Mexico,” or, more recently, for the government to “Build a wall.” Latinxs displaying national flags are accused of wanting to conquer (or reconquer) the southwest, reclaiming it from the United States for Mexico. This anxiety about being “conquered” by the growing Latinx population perhaps betrays an anxiety that the southwestern states (what Chicanxs call Aztlán) are not so much a stable part of the conquered body as an expression of how the idea of “nation” is itself unstable within US borders. When a non-citizen, a subject sin papeles, writes about the experience of being undocumented, they are faced with a backlash from those who believe their existence, if they are allowed existence in the United States at all, is one without rights, without voice. Any attempt to give voice to their position brings overt threats of government action against their tenuous existence in the US, however strong their cultural ties to the United States. My students, writing in public about their undocumented status, are reminded that their bodies are not citizens and that the right to free speech, the right to write one’s truth in public, is one given to citizen subjects.

This has left me with a paradox. My students should write in public. Part of what they are learning in Chicanx studies is the importance of their voices; their experiences and their stories are ones that should be told. Yet, given the risks of discussing migration and immigration through public writing, I wonder whether I as an instructor should encourage or discourage students from writing about their lives, their experiences as undocumented migrants, experiences which have touched every aspect of their lives. From a practical point of view, I could set up stricter anonymity so their identities are better shielded. I could have them create their own blogs, passing to them the responsibility of protecting themselves. Or I could make the writing “public” only in the sense of being public within the space of the classroom, by using learning management software to keep it, and them, behind a protective wall.

    _____

Annemarie Perez is an Assistant Professor of Interdisciplinary Studies at California State University Dominguez Hills. Her area specialty is Latina/o literature and culture, with a focus on Chicana feminist writer-editors from 1965 to the present, and on digital humanities and digital pedagogy and their intersections with and divisions within ethnic and cultural studies. She is writing a book on Chicana feminist editorship, using digital research to perform close readings across multiple editions and versions of articles and essays.


    _____

    Acknowledgments

[*] This article is an outgrowth of a paper presented at HASTAC 2015 for a session titled “DH: Affordances and Limits of Post/Anti/Decolonial and Indigenous Digital Humanities.” The other panel presenters were Roopika Risam (moderator), Siobhan Senier, Micha Cárdenas and Dhanashree Thorat.

    _____

    Notes

    [1] “Chicanx” is a gender neutral, sometimes contested, term of self-identification. I use it to mean someone of Mexican origin, raised in the United States, identifying with a politic of resistance to mainstream US hegemony and an identification with indigenous American cultures.

    _____

    Works Cited

    • Anzaldúa, Gloria. 1987. Borderlands/La Frontera: The New Mestiza. San Francisco: Aunt Lute Books.
    • Elenes, C. Alejandra. 1997. “Reclaiming The Borderlands: Chicana/o Identity, Difference, and Critical Pedagogy.” Educational Theory 47:3. 359-75.
    • Gonzalez-Barrera, Ana and Jens Manuel Krogstad. 2014. “U.S. Deportations of Immigrants Reach Record High in 2013.” Pew Research Center.
    • Medeiros, Eileen. 2007. “Public Writing Inside and Outside the Classroom: A Comparative Analysis of Activist Rhetorics.” Dissertations and Master’s Theses. Paper AAI3298371.
    • Moraga, Cherríe. 1983. Loving in the War Years: Lo Que Nunca Pasó Por Sus Labios. Boston, MA: South End Press.
    • Ohmann, Richard. 2003. Politics of Knowledge: The Commercialization of the University, the Professions, and Print Culture. Middleton, CT: Wesleyan University Press.
    • Perez, Laura. 1993. “Opposition and the Education of Chicana/os.” In Cameron McCarthy and Warren Crichlow, eds., Race, Identity, and Representation in Education. New York: Routledge.
    • Rosaldo, Renato. 1994. “Cultural Citizenship and Educational Democracy.” Cultural Anthropology 9:3. 402-411.
    • Tryon, Charles. 2006. “Writing and Citizenship: Using Blogs to Teach First-Year Composition.” Pedagogy 6:1. 128-132.
    • Wan, Amy J. 2011. “In the Name of Citizenship: The Writing Classroom and the Promise of Citizenship.” College English 74. 28-49.
    • Weisser, Christian R. 2002. Moving Beyond Academic Discourse: Composition Studies and the Public Sphere. Carbondale: Southern Illinois University Press.
    • Welch, Nancy. 2005. “Living Room: Teaching Public Writing in a Post-Publicity Era.” College Composition and Communication 56:3. 470-492.


  • John Pat Leary — Innovation and the Neoliberal Idioms of Development

    John Pat Leary — Innovation and the Neoliberal Idioms of Development

    John Pat Leary

    “Human creativity and human capacity is limitless,” said the Bangladeshi economist Muhammad Yunus to a darkened room full of rapt Austrian elites. The setting was TEDx Vienna, and Yunus’s address bore all the trademark features of TED’s missionary version of technocratic idealism. “We believe passionately in the power of ideas to change attitudes, lives and, ultimately, the world,” goes the TED mission statement, and this philosophy is manifest in the familiar form of Yunus’s talk (TED.com). The lighting was dramatic, the stage sparse, and the speaker alone on stage, with only his transformative ideas for company. The speech ends with the zealous technophilia that, along with the minimalist stagecraft and quaint faith in the old-fashioned power of lectures, defines this peculiar genre. “This is the age where we all have this capacity of technology,” Yunus declares: “The question is, do we have the methodology to use these capacities to address these problems?… The creativity of human beings has to be challenged to address the problems we have made for ourselves. If we do that, we can create a whole new world—we can create a whole new civilization” (Yunus 2012). Yunus’s conviction that now, finally and for the first time, we can solve the world’s most intractable problems, is not itself new. Instead, what TED Talks like this offer is a new twist on the idea of progress we have inherited from the nineteenth century. And with his particular focus on the global South, Yunus riffs on a form of that old faith, which might seem like a relic of the twentieth: “development.” What is new, then, about Yunus’s articulation of these old faiths? It comes from the TED Talk’s combination of prophetic individualism and technophilia: this is the ideology of “innovation.”

“Innovation”: a ubiquitous word with a slippery meaning. “An innovation is a novelty that sticks,” writes Michael North in Novelty: A History of the New, pointing out the basic ontological problem of the word: if it sticks, it ceases to be a novelty. “Innovation, defined as a widely accepted change,” he writes, “thus turns out to be the enemy of the new, even as it stands for the necessity of the new” (North 2013, 4). Originally a pejorative term for religious heresy, “innovation” in its common use today is a synonym for what would once have been called, especially in America, “futurity” or “progress.” In a policy paper entitled “A Strategy for American Innovation,” then-President Barack Obama described innovation as an American quality, in which the blessings of Providence are revealed no longer by the acquisition of territory, but rather by the accumulation of knowledge and technologies: “America has long been a nation of innovators. American scientists, engineers and entrepreneurs invented the microchip, created the Internet, invented the smartphone, started the revolution in biotechnology, and sent astronauts to the Moon. And America is just getting started” (National Economic Council and Office of Science and Technology Policy 2015, 10).

    In the Obama administration’s usage, we can see several of the common features of innovation as an economic ideology, some of which are familiar to students of American exceptionalism. First, it is benevolent. Second, it is always “just getting started,” a character of newness constantly being renewed. Third, like “progress” and “development” have been, innovation is a universal, benevolent abstraction made manifest through material, economic accomplishments. But even more than “progress,” which could refer to political and social accomplishments like universal suffrage or the polio vaccine, or “development,” which has had communist and social democratic variants, innovation is inextricable from the privatized market that animates it. For this reason, Obama can treat the state-sponsored moon landing and the iPhone as equivalent achievements. Finally, even if it belongs to the nation, the capacity for “innovation” really resides in the self. Hence Yunus’s faith in “creativity,” and Obama’s emphasis on “innovators,” the protagonists of this heroic drama, rather than the drama itself.

This essay explores the individualistic, market-based ideology of “innovation” as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called “development.” I am referring principally to projects that often go under the name of “social innovation” (or, relatedly, “social entrepreneurship”), which Stanford University’s Business School defines as “a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions” (Stanford Graduate School of Business). “Social innovation” often advertises itself as “market-based solutions to poverty,” proceeding from the conviction that it is exclusion from the market, rather than the opposite, that causes poverty. The practices grouped under this broad umbrella include projects as different as the micro-lending banks, for which Yunus shared the 2006 Nobel Peace Prize; smokeless, cell-phone-charging cookstoves for South Asia’s rural peasantry; latrines that turn urine into electricity, for use in rural villages without running water; and the edtech academic and TED honoree Sugata Mitra’s “self-organized learning environment” (SOLE), which appears to consist mostly of giving internet-enabled laptops to poor children and calling it a day.

    The discourse of social innovation is a theory about economic process and also a story of the (first-world) self. The ideal innovator that emerges from the examples to follow is a flexible, socially autonomous individual, whose creativity and prophetic vision, nurtured by the market, can refashion social inequalities as discrete “problems” that simply await new solutions. Guided by a faith in the market but also shaped by the austerity that has slashed the budgets of humanitarian and development institutions worldwide, social innovation ideology marks a retreat from the social vision of development. Crucially, the ideologues of innovation also answer a post-development critique of Western arrogance with a generous, even democratic spirit. That is, one of the reasons that “innovation” has come to supersede “development” in the vocabulary of many humanitarian and foreign aid agencies is that innovation ideology’s emphasis on individual agency serves as a response to the legitimate charges of condescension and elitism long directed at Euro-American development agencies. But compromising the social vision of development also means jettisoning the ideal of global equality that, however deluded, dishonest, or self-serving it was, also came with it. This brings us to a critical feature of innovation thinking that is often disguised by the enthusiasm of its tech-economy evangelizers: it is in fact a pessimistic ideal of social change. The ideology of innovation, with its emphasis on processes rather than outcomes, and individual brilliance over social structures, asks us to accommodate global inequality, rather than challenge it. It is a kind of idealism, therefore, well suited to our dispiriting neoliberal moment, where the sense of possibility seems to have shrunk.

    My objective is not to evaluate these efforts individually, nor even to criticize their practical usefulness as solution-oriented projects (not all of them, anyway). Indeed, in response to the difficult, persistent question, “What is the alternative?” it is easy, and not terribly helpful, to simply answer “world socialism,” or at least “import-substitution industrialization.” My objective is perhaps more modest: to define the ideology of “innovation” that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.

    An Orthodoxy of Unorthodoxy: Innovation, Revolution, and Salvation

Given the immodesty of the innovator archetype, it may seem odd that innovation ideology could be considered pessimistic. On its own terms, of course, it is not; but when measured against the utopian ambitions and rhetoric of many “social innovators” and technology evangelists, their actual prescriptions appear comparatively paltry. Human creativity is boundless, and everyone can be an innovator, says Yunus; this is the good news. The bad news, unfortunately, is that not everyone can have indoor plumbing or public lighting. Consider the “pee-powered toilet” sponsored by the Gates Foundation. The outcome of inadequate sewerage in the underdeveloped world has not been changed; only the process of its provision has been innovated (Smithers 2015). This combination of evangelical enthusiasm and piecemeal accommodation becomes clearer, however, when we excavate innovation’s tangled history, a history that the word, at first glance, might seem to lack entirely.

Figure 1. A demonstration toilet, capable of powering a light, or even a mobile phone, at the University of the West of England (photograph: UWE Bristol)

    For most of its history, the word has been synonymous with false prophecy and dissent: initially, it was linked to deceitful promises of deliverance, either from divine judgment or more temporal forms of punishment. For centuries, this was the most common usage of this term. The charge of innovation warned against either the possibility or the wisdom of remaking the world, and disciplined those “fickle changelings and poor discontents,” as the King says in Shakespeare’s Henry IV, grasping at “hurly-burly innovation.” Religious and political leaders tarred self-styled prophets or rebels as heretical “innovators.” In his 1634 Institution of the Christian Religion, for example, John Calvin warned that “a desire to innovate all things without punishment moveth troublesome men” (Calvin 1763, 716).  Calvin’s notion that innovation was both a political and theological error reflected, of course, his own jealously kept share of temporal and spiritual authority. For Thomas Hobbes, “innovators” were venal conspirators, and innovation a “trumpet of war and sedition.” Distinguishing men from bees—which Aristotle, Hobbes says, wrongly considers a political animal like humans—Hobbes laments the “contestation of honour and preferment” that plagues non-apiary forms of sociality. Bees only “talk” when and how they have to; men and women, by contrast, chatter away in their vanity and ambition (Hobbes 1949, 65-67). The “innovators” of revolutionary Paris, Edmund Burke thundered later, “leave nothing unrent, unrifled, unravaged, or unpolluted with the slime of their filthy offal” (1798, 316-17). Innovation, like its close relative “revolution,” was upheaval, destruction, the reversal of the right order of things.

Figure 2. The Innovation Tango, in The Evening World

As Godin (2015) shows in his history of the concept in Europe, in the late nineteenth century “innovation” began to be recuperated as an instrumental force in the world, which was key to its transformation into the affirmative concept we know now. Francis Bacon, the philosopher and Lord Chancellor under King James I, was what we might call an “early adopter” of this new positive instrumental meaning. How, he asked, could Britons be so reverent of custom and so suspicious of “innovation,” when their Anglican faith was itself an innovation? (Bacon 1844, 32). Instead of being an act of sudden rending, rifling, and heretical ravaging, “innovation” became a process of patient material improvement. By the turn of the last century, the word had mostly lost its heretical associations. In fact, “innovation” was far enough removed from wickedness or malice in 1914 that the dance instructor Vernon Castle invented a modest American version of the tango that year and named it “the Innovation.” The partners never touched each other in this chaste improvement upon the Argentine dance. “It is the ideal dance for icebergs, surgeons in antiseptic raiment and militant moralists,” wrote Marguerite Marshall (1914), a thoroughly unimpressed dance critic in the New York Evening World. “Innovation” was then beginning to assume its common contemporary form in commercial advertising and economics, as a synonym for a broadly appealing, unthreatening modification of an existing product.

Two years earlier, the Austrian-born economist Joseph Schumpeter had published his landmark text The Theory of Economic Development, where he first used “innovation” to describe the function of the “entrepreneur” in economic history (1934, 74). For Schumpeter, it was in the innovation process that capitalism’s tendency towards tumult and creative transformation could be seen. He understood innovation historically, as a process of economic transformation, but he also singled out an innovator responsible for driving the process. In his 1942 book Capitalism, Socialism, and Democracy, Schumpeter returned to the idea in the midst of war and the threat of socialism, which gave the concept a new urgency. To innovate, he wrote, was “to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on” (Schumpeter 2003, 132). As Schumpeter goes on to acknowledge, this transformative process is hard to quantify or professionalize. The elusiveness of his theory of innovation comes from a central paradox in his own definition of the word: it is both a world-historical force and a quality of personal agency, both a material process and a moral characteristic. It was a historical process embodied in heroic individuals he called “New Men,” and exemplified in non-commercial examples, like the “expressionist liquidation of the object” in painting (126). To innovate was also to do, at the local level of the production process, what Marx and Engels credit the bourgeoisie as a class for accomplishing historically: revolutionizing the means of production, sweeping away what is old before it can ossify. Schumpeter told a different version of this story, though. For Marx, capitalist accumulation is a dialectical historical process, but what Schumpeter called innovation was a drama driven by a particular protagonist: the entrepreneur.

In a sympathetic 1943 essay about Schumpeter’s theory of innovation, the Marxist economist Paul Sweezy criticized the centrality Schumpeter gave to individual agency. Sweezy’s interest in the concept is unsurprising, given how Schumpeter’s treatment of capitalism as a dynamic but destructive historical force draws upon Marx’s own. It is therefore not “innovation” as a process to which Sweezy objects, but the mythologized figure of the entrepreneurial “innovator,” the social type driving the process. Rather than a free agent powering the economy’s inexorable progress, “we may instead regard the typical innovator as the tool of the social relations in which he is enmeshed and which force him to innovate on pain of elimination,” he writes (Sweezy 1943, 96). In other words, it is capital accumulation, not the entrepreneurial function, and certainly not some transcendent ideal of creativity and genius, that drives innovation. And while the innovator (the successful one, anyway) might achieve a pantomime of freedom within the market, for Sweezy this agency is always provisional, since innovation is a conditional economic practice of historically constituted subjects in a volatile and pitiless market, not a moral quality of human beings. Of course, Sweezy’s critique has not won the day. Instead, a particularly heroic version of the Schumpeterian sense of innovation as a human, moral quality liberated by the turbulence of capitalist markets is a mainstream feature of institutional life. An entire genre of business literature exists to teach the techniques of “managing creativity and innovation in the workplace” (The Institute of Leadership and Management 2007), to uncover the “map of innovation” (O’Connor and Brown 2003), to nurture the “art of innovation” (Kelley 2001), to close the “circle of innovation” (Peters 1999), to collect the recipes in “the innovator’s cookbook” (Johnson 2011), to give you the secrets of “the sorcerers and their apprentices” (Moss 2011)—business writers leave virtually no hackneyed metaphor for entrepreneurial creativity, from the domestic to the occult, untouched.

As its contemporary proliferation shows, innovation has never quite lost its association with redemption and salvation, even if it is no longer used to signify their false promises. As Lepore (2014) has argued about its close cousin, “disruption,” innovation can be thought of as a secular discourse of economic and personal deliverance. Even as the concept became rehabilitated as procedural, its deviant and heretical connotations were common well into the twentieth century, when Emma Goldman (2000) proudly and defiantly described anarchy as an “uncompromising innovator” that enraged the princes and oligarchs of the world. Its seeming optimism, which is inseparable from the disasters from which it promises to deliver us, is thus best considered as a response to a host of persistent anxieties of twenty-first-century life: economic crisis, violence and war, political polarization, and ecological collapse. Yet the word has come to describe the reinvention or recalibration of processes, whether algorithmic, manufacturing, marketing, or otherwise. Indeed, even Schumpeter regarded the entrepreneurial function as basically technocratic. As he put it in one essay, “it consists in getting things done” (Schumpeter 1941, 151).[1] However, as the book titles above make clear, the entrepreneurial function is also a romance. If capitalism was to survive and thrive, Schumpeter suggested, it needed to do more than produce great fortunes: it had to excite the imagination. Otherwise, it would simply calcify into the very routines it was charged with overthrowing. Innovation discourse today remains, paradoxically, both procedural and prophetic. The former meaning lends innovation discourse its piecemeal, solution-oriented accommodation to inequality. In the latter sense, though, the word retains some of the heretical rebelliousness of its origins. We are familiar with the lionization of the tech CEO as a non-conforming or “disruptive” visionary, who sets out to “move fast and break things,” as the famous Facebook motto went. The archetypal Silicon Valley innovator is forward-looking and rebellious, regardless of how we might characterize the results of his or her innovation—a social network, a data mining scheme, or Uber-for-whatever. The dissenting meaning of innovation is at play in the case of social innovation, as well, given its aim to address social inequalities in significant new ways. So, in spite of innovation’s implicit bias towards the new, the history and present-day use of the word remind us that its present-day meaning is seeded with its older ones. Innovation’s new secular, instrumental meaning is therefore not a break with its older, prohibited, religious connotation, but an embellishment of it: what is described here is a spirit, an ideal, an ideological rescrambling of the word’s older heterodox meaning to suit a new orthodoxy.

    The Innovation of Underdevelopment: From Exploitation to Exclusion

    In his 1949 inaugural address, which is often credited with popularizing the concept of “development,” Harry Truman called for “a bold new program for making the benefits of our scientific advances and industrial progress available for the improvement and growth of underdeveloped areas” (Truman 1949).[2] “Development” in U.S. modernization theory was defined, writes Nils Gilman, by “progress in technology, military and bureaucratic institutions, and the political and social structure” (2003, 3). It was a post-colonial version of progress that defined itself as universal and placeless; all underdeveloped societies could follow a similar path. As Kristin Ross argues, development in the vein of post-war modernization theory anticipated a future “spatial and temporal convergence” (1996, 11-12). Emerging in the collapse of European colonialism, the concept’s positive value was that it positioned the whole world, south and north, as capable of the same level of social and technical achievement. As Ross suggests, however, the future “convergence” that development anticipates is a kind of Euro-American ego-ideal—the rest of the world’s brightest possible future resembled the present of the United States or western Europe. As Gilman puts it, the modernity development looked forward to was “an abstract version of what postwar American liberals wished their country to be.”

Emerging as it did in the decline, and then in the wake, of Europe’s African, Asian, and American empires, mainstream mid-century writing on development trod carefully around the issue of exploitation. Gunnar Myrdal, for example, was careful to distinguish the “dynamic” term “underdeveloped” from its predecessor, “backwards” (1957, 7). Rather than view the underdeveloped as static wards of more “advanced” metropolitan countries, in other words, the preference was to view all peoples as capable of historical dynamism, even if they occupied different stages on a singular timeline. Popularizers of modernization theory like Walter Rostow described development as a historical stage that could be measured by certain material benchmarks, like per-capita car ownership. But it also required immaterial, subjective cultural achievements, as Josefina Saldaña-Portillo, Jorge Larrain, and Molly Geidel have pointed out. In his well-known Stages of Economic Growth, Rostow emphasized how achieving modernity required the acquisition of what he called “attitudes,” such as a “Newtonian” worldview and an acclimation to “a life of change and specialized function” (1965, 26). His emphasis on cultural attributes—prerequisites for starting development that are also consequences of achieving it—is an example of the development concept’s circular, often self-contradictory meanings. “Development” was both a process and its end point—a nation undergoes development in order to achieve development, something Cowen and Shenton call the “old utilitarian tautology of development” (1996, 4), in which a precondition for achieving development would appear to be its presence at the outset.

This tautology eventually circles back to what Nustad (2007, 40) calls the lingering colonial relationship of trusteeship, the original implication of colonial “development.” For post-colonial critics of developmentalism, the very notion of “development” as a process unfolding in time is inseparable from this colonial relation, given the explicit or implicit Euro-American telos of most, if not all, development models. Where modernization theorists “naturalized development’s emergence into a series of discrete stages,” Saldaña-Portillo (2003, 27) writes, the Marxist economists and historians grouped loosely under the heading of “dependency theory” spatialized global inequality, using a model of “core” and “periphery” economies to counter the model of “traditional” and “modern” ones. Two such theorists, Andre Gunder Frank and Walter Rodney, framed their critiques of development with the grammar of the word itself. Like “innovation,” “development” is a progressive noun, which indicates an ongoing process in time. Its temporal and agential imprecision—when will the process ever end? Can it? Who is in charge?—helps to lend development a sense of moral and political neutrality, which it shares with “innovation.” Frank titled his most famous essay on the subject “The Development of Underdevelopment,” the title emphasizing the point that underdevelopment was not a mere absence of development, but capitalist development’s necessary product. Rodney’s book How Europe Underdeveloped Africa did something similar, by making “underdevelop” into a transitive verb, rather than treating “underdevelopment” as a neutral condition.[3]

As Luc Boltanski and Eve Chiapello argue, this language of neutrality became a hallmark of European accounts of global poverty and underdevelopment after the 1960s. Their survey of economics and development literature charts the rise of the category of “exclusion” (and its opposite number, “empowerment”) and the gradual disappearance of “exploitation” from economic and humanitarian literature about poverty. No single person, firm, institution, party, or class is responsible for “exclusion,” Boltanski and Chiapello explain. Reframing exploitation as exclusion therefore “permits identification of something negative without proceeding to level accusations. The excluded are no one’s victims” (2007, 347, 354). Exploitation is a circumstance that enriches the exploiter; the poverty that results from exclusion, however, is a misfortune profiting no one. Consider, as an example, the mission statement of the Grameen Foundation, which grew out of Yunus’s Grameen Bank. It remains one of the leading microlenders in the world, devoted to bringing impoverished people in the global South, especially women, into the financial system through the provision of small, low-collateral loans. “Empowerment” and “innovation” are two of its core values. “We champion innovation that makes a difference in the lives of the poor,” runs one plank of the Foundation’s mission statement (Grameen Foundation India nd). “We seek to empower the world’s poor, especially the poorest women.” “Innovation” is often not defined in such statements, but rather treated as self-evidently meaningful. Like “development,” innovation is a perpetually ongoing process, with no clear beginning or end. One undergoes development to achieve development; innovation, in turn, is the pursuit of innovation, and as soon as one innovates, the innovation thus created soon ceases to be an innovation. This wearying semantic circle helps evacuate the process of its power dynamics, of winners and losers. As Evgeny Morozov (2014, 5) has argued about what he calls “solutionism,” the celebration of technological and design fixes approaches social problems like inequality, infrastructural collapse, inadequate housing, etc.—which might be regarded as results of “exploitation”—as intellectual puzzles for which we simply have to discover the solutions. The problems are not political; rather, they are conceptual: we either haven’t had the right ideas, or else we haven’t applied them right.[4] Grameen’s mission, to bring the world’s poorest into financial markets that currently do not include them, relies on a fundamental presumption: that the global financial system is something you should definitely want to be a part of.[5] But as Banerjee et al. (2015, 23) have argued, to the extent that microcredit programs offer benefits, they mostly accrue to already profitable businesses. The broader social benefits touted by the programs—women’s “empowerment,” more regular school attendance, and so on—were either negligible or non-existent. And as a local government official in the Indian state of Andhra Pradesh told the New York Times in 2010, microloan programs in his district had not proven to be less exploitative than their predecessors, only more remote. “The money lender lives in the community,” he said. “At least you can burn down his house” (Polgreen and Bajaj 2010).

    Humanitarian Innovation and the Idea of “The Poor”

    Yunus’s TED Talk and the Grameen Foundation’s mission statement draw on the twinned ideal of innovation as procedure and salvation, and in so doing they recapitulate development’s modernist faith in the leveling possibilities of technology, albeit with the individualist, market-based zeal that is particular to neoliberal innovation thinking. “Humanitarian innovation” is a growing subfield of international development theory, which, like “social innovation,” encourages market-based solutions to poverty. Most scholars date the concept to the 2009 fair held by ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action), an international humanitarian aid agency that measures and evaluates aid programs.  Two of its leading academic proponents, Alexander Betts and Louise Bloom of the Oxford Humanitarian Innovation Project, define it thusly:

    “Innovation is the way in which individuals or organizations solve problems and create change by introducing new solutions to existing problems. Contrary to popular belief, these solutions do not have to be technological and they do not have to be transformative; they simply involve the adaptation of a product or process to context. ‘Humanitarian’ innovation may be understood, in turn, as ‘using the resources and opportunities around you in a particular context, to do something different to what has been done before’ to solve humanitarian challenges” (Betts and Bloom 2015, 4).[6]

    Here and elsewhere, the HIP hews closely to conventional Schumpeterian definitions of the term, which indeed inform most uses of “innovation” in the private sector and elsewhere: as a means of “solving problems.” Read in this light, “innovation” might seem rather innocuous, even banal: a handy way of naming a human capacity for adaptation, improvisation, and organization. But elsewhere, the authors describe humanitarian innovation as an urgent response to very specific contemporary problems that are political and ecological in nature. “Over the past decade, faced with growing resource constraints, humanitarian agencies have held high hopes for contributions from the private sector, particularly the business community,” they write. Compounding this climate of economic austerity that derives from “growing resource constraints” is an environmental and geopolitical crisis that means “record numbers of people are displaced for longer periods by natural disasters and escalating conflicts.” But despite this combination of violence, ecological degradation, and austerity, there is hope in technology: “new technologies, partners, and concepts allow humanitarian actors to understand and address problems quickly and effectively” (Betts and Bloom 2014, 5-6).

    The trope of “exclusion,” and its reliance on a rather anodyne vision of the global financial system as a fair sorter of opportunities and rewards, is crucial to a field that counsels collaboration with the private sector. Indeed, humanitarian innovators adopt a financial vocabulary of “scaling,” “stakeholders,” and “risk” in assessing the dangers and effectiveness (the “cost” and “benefits”) of particular tactics or technologies. In one paper on entrepreneurial activity in refugee camps, de la Chaux and Haugh make an argument in keeping with innovation discourse’s combination of technocratic proceduralism and utopian grandiosity: “Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” they write, emphasizing in their first clause the political and economic austerity that conditions the “entrepreneurial” response (2014, 2). Relying on an exclusion paradigm, the authors point to a “lack of functioning markets” as a cause of poverty in the camps. By “lack of functioning markets,” de la Chaux and Haugh mean lack of capital—but “market,” in this framework, becomes simply an institutional apparatus that one enters to be adjudicated on one’s merits, rather than a field of conflict in which one labors in a globalized class society. At the same time, “innovation” that “empowers” the world’s “poorest” also inherits an enduring faith in technology as a universal instrument of progress. One of the preferred terms for this faith is “design”: a form of techne that, two of its most famous advocates argue, “addresses the needs of the people who will consume a product or service and the infrastructure that enables it” (Brown and Wyatt 2010).[7] The optimism of design proceeds from the conviction that systems—water safety, nutrition, etc.—fail because they are designed improperly, without input from their users. De la Chaux addresses how ostensibly temporary camps grow into permanent settlements, using Jordan’s Za’atari refugee camp near the Syrian border as an example. Her elegant solution to the infrastructural problems these under-resourced and overpopulated communities experience? “Include urban planners in the early phases of the humanitarian emergency to design out future infrastructure problems,” as if the political question of resources were merely secondary to technical questions of design and expertise (de la Chaux and Haugh 2014, 19; de la Chaux 2015).

    In these examples, we can see once again how the ideal type of the “innovator” or entrepreneur emerges as the protagonist of the historical and economic drama unfolding in the peripheral spaces of the world economy. The humanitarian innovator is a flexible, versatile, pliant, and autonomous individual, whose potential is realized in the struggle for wealth accumulation, but whose private zeal for accumulation is thought to benefit society as a whole.[8] Humanitarian or social innovation discourse emphasizes the agency and creativity of “the poor,” discursively centering the authority of the “user” or entrepreneur rather than that of the aid agency or the consumer. Individual qualities like purpose, passion, creativity, and serendipity are mobilized in the service of broad social goals. Yet while this sort of individualism is central in the literature of social and humanitarian innovation, it is not itself a radically new “innovation.” It instead recalls a pattern that Molly Geidel has recently traced in the literature and philosophy of the Peace Corps. In Peace Corps memoirs and in the agency’s own literature, she writes, the “romantic desire” for salvation and identification with the excluded “poor” was channeled into the “technocratic language of development” (2015, 64).

    Innovation’s emphasis on the intellectual, spiritual, and creative faculties of the single entrepreneur as historically decisive recapitulates, in especially individualistic terms, a persistent thread in Cold War development thinking: its emphasis on cultural transformations as prerequisites for economic ones. At the same time, humanitarian innovation’s anti-bureaucratic ethos of autonomy and creativity is often framed as a critique of “developmentalism” as a practice and an industry. It is a response to criticisms of twentieth-century development as a form of neocolonialism: as too growth-dependent, too detached from local needs, too fixated on big projects, too hierarchical. Consider the development agency UNICEF, whose 2014 “Innovation Annual Report” embraces a vocabulary and funding model borrowed from venture capital. “We knew that we needed to help solve concrete problems experienced by real people,” reads the report, “not just building imagined solutions at our New York headquarters and then deploy them” (UNICEF 2014, 2). Rejecting a hierarchical model of modernization, in which an American developmentalist elite “deploys” its models elsewhere, UNICEF proposes “empowerment” from within. And in place of “development,” a technical process of improvement from a belated historical and economic position of premodernity, there is “innovation,” the creative capacity responsive to the desires and talents of the underdeveloped.

    As in the social innovation model promoted by the Stanford Business School and the ideal of “empowerment” advanced by Grameen, the literature of humanitarian innovation sees “the market” as a neutral field. The conflict among the private sector, the military, and other non-humanitarian actors in the process of humanitarian innovation is mitigated by considering each as an equivalent “stakeholder,” with a shared “investment” in the enterprise and its success; abuse of the humanitarian mission by profit-seeking and military “stakeholders” can be prevented via the fabrication of “best practices” and “voluntary codes of conduct” (Betts and Bloom 2015, 24). One report, produced for ALNAP along with the Humanitarian Innovation Fund, draws on Everett Rogers’s canonical theory of innovation diffusion. Rogers taxonomizes and explains the ways innovative products or methods circulate, from the most forward-thinking “early adopters” to the “laggards” (1983, 247-250). The ALNAP report does grapple with the problems of importing profit-seeking models into humanitarian work, however. “In general,” write Obrecht and Warner (2014, 80-81), “it is important to bear in mind that the objective for humanitarian scaling is improvement to humanitarian assistance, not profit.” Here, the problem is explained as one of “diffusion” and institutional biases in non-profit organizations, not a conflict of interest or a failing of the private market. In the humanitarian sector, they write, “early adopters” of innovations developed elsewhere are comparatively rare, since non-profit workers tend to be biased towards techniques and products they develop themselves. However, as Wendy Brown (2015, 129) has recently argued about the concepts of “best practices” and “benchmarking,” the problem is not necessarily that the goals being set or practices being emulated are intrinsically bad. The problem lies in “the separation of practices from products,” or in other words, the notion that organizational practices translate seamlessly across business, political, and knowledge enterprises, and that different products—market dominance, massive profits, reliable electricity in a rural hamlet, basic literacy—can be accomplished via practices imported from the business world.

    Again, my objective here is not to evaluate the success of individual initiatives pursued under this rubric, nor to castigate individual humanitarian aid projects as irredeemably “neoliberal” and therefore beyond the pale. To do so is to bask a bit too easily in the comfort of condemnation that the pejorative “neoliberal” offers the social critic, and to run the risk, as Ferguson (2009, 169) writes, of nostalgia for the era of “old-style developmental states,” which were mostly capitalist as well, after all.[9] Instead, my point is to emphasize the political work that “innovation” as a concept does: it depoliticizes the resource scarcity that makes it seem necessary in the first place by treating the private market as a neutral arbiter or helpful partner rather than an exploiter, and it does so by disavowing the power of a Western subject through the supposed humility and democratic patina of its rhetoric. For example, USAID’s Development Innovation Ventures program, which seeds projects that will win support from private lenders later, stipulates that “applicants must explain how they will use DIV funds in a catalytic fashion so that they can raise needed resources from sources other than DIV” (USAID 2017). The hoped-for innovation here, it would seem, is the skill with which the applicants accommodate the scarcity of resources, and the facility with which they commercialize their project. One funded project, an initiative to encourage bicycle helmet use in Cambodia, “has the potential to save the Cambodian government millions of dollars over the next 10 years,” the description proclaims. But just because something saves the Cambodian government millions doesn’t mean there is a net gain for the health and safety of Cambodians. It could simply allow the Cambodian government to give more money away to private industry or buy $10 million worth of new weapons to police the Laotian border. “Innovation,” here, requires an adjustment to austerity.

    Adjustment, often reframed positively as “resilience,” is a key concept in this literature. In another report, Betts, Bloom, and Weaver (2015, 8) single out a few exemplary innovators from the informal economy of the displaced person’s camp. They include tailors in a Syrian camp’s outdoor market; the Somali owner of an internet café in a Kenyan refugee camp; an Ethiopian man who repairs refrigerators with salvaged air conditioners and fans; and a Ugandan who built a video-game arcade in a settlement near the Rwandan border. This man, identified only as Abdi, has amassed a collection of second-hand televisions and game consoles acquired in Kampala, the Ugandan capital. “Instead of waiting for donors I wanted to make a living,” says Abdi in the report, exemplifying the values of what Betts, Bloom, and Weaver call “bottom-up innovation” by the refugee entrepreneur. Their assessment is a generous one that embraces the ingenuity and knowledge of displaced and impoverished people affected by crisis. Top-down or “sector-wide” development aid, they write, “disregards the capabilities and adaptive resourcefulness that people and communities affected by conflict and disaster often demonstrate” (2015, 2). In this report, refugees are people of “great resilience,” whose “creativity” makes them “change makers.” As Julian Reid and Brad Evans write, we apply the word “resilient” to a population “insofar as it adapts to rather than resists the conditions of its suffering in the world” (2014, 81). The discourse of humanitarian innovation makes the same concession to the inevitability of the structural conditions that make such resilience necessary in the first place. Nowhere is it suggested that refugee capitalists might be other than benevolent, or that inclusion in circuits of national and transnational capital might exacerbate existing inequalities, rather than transcend them. Furthermore, humanitarian innovation advocates never argue that market-based product and service “innovation” is, in a refugee context, beneficial to the whole, given the paucity of employment and services in affected communities; this would at least be an arguable point. The problem is that the question is never even asked. The market is like oxygen.

    Conclusion: The TED Talk and the Innovation Romance

    In 2003, I visited a recently settled barrio—one could call it a “shantytown”—perched on a hillside high above the east side of Caracas. I remember vividly a wooden, handmade press, ringed with barbed wire scavenged from a nearby business, that its owner, a middle-aged woman newly arrived in the capital, used to crush sugar cane into juice. It was certainly an innovation, by any reasonable definition: a novel, creative solution to a problem of scarcity, a new process for doing something. I remember being deeply impressed by the device, which I found brilliantly ingenious. What I never thought to call it, though, was a “solution” to its owner’s poverty. Nor, I am sure, did she; she lived in a hard-core chavista neighborhood, where dispossessing the country’s “oligarchs” would have been offered as a better innovation—in the old Emma Goldman sense. My point, therefore, is not that individual ingenuity, creativity, fearlessness, hard work, and resistance to the impossible demands that transnational capital has placed on people like the video-game entrepreneur in Uganda, or that woman in Caracas, are disreputable things to single out and praise. Quite the contrary: my objection is to the capitulation to their exploitation that is smuggled in with this admiration.

    I have argued that “innovation” is, at best, a vague concept asked to accommodate far too much in its combination of heroic and technocratic meanings. Innovation, in its modern meaning, is about revolutionizing “process” and technique: this often leaves outcomes unexamined and unquestioned. The outcome of that innovative sugar cane press in Caracas is still a meager income selling juice in a perilous informal marketplace. The promiscuity of innovation’s use also makes it highly mobile and subject to abuse, as even enthusiastic users of the concept, like Betts and Bloom at the Oxford Humanitarian Innovation Project, acknowledge. As they caution, “use of the term in the humanitarian system has lacked conceptual clarity, leading to misuse, overuse, and the risk that it may become hollow rhetoric” (2014, 5). I have also argued that innovation, especially in the context of neoliberal development, must be understood in moral terms, as it makes a virtue of private accumulation and accommodation to scarcity, and it circulates an ego-ideal of the first-world self to an audience of its admirers. It is also an ideological celebration of what Harvey calls the neoliberal alignment of individual well-being with unregulated markets, and what Brown calls “the economization of the self” (2015, 33). Finally, as a response to the enduring crises of third-world poverty, exacerbated by the economic and ecological dangers of the twenty-first century, the language of innovation beats a pessimistic retreat from the ideal of global equality that, in theory at least, development in its manifold forms always held out as its horizon.

    Innovation discourse draws on deep wells—its moral claim is not new, as a reader of The Protestant Ethic and the Spirit of Capitalism will observe. Inspired in part by the example of Benjamin Franklin’s autobiography, Max Weber argued that capitalism in its ascendancy reimagined profit-seeking activities, which might once have been described as avaricious or vulgar, as a virtuous “ethos” (2001, 16-17). Capitalism’s challenge to tradition, Weber argued, demanded some justification; reframing business as a calling or a vocation could help provide one. Capitalism in our time still demands validation not only as a virtuous discipline but as an enterprise devoted to serving the “common good,” write Boltanski and Chiapello: “an existence attuned to the requirements of accumulation must be marked out for a large number of actors to deem it worth the effort of being lived” (2007, 10-11). “Innovation” as an ideology marks out this sphere of purposeful living for the contemporary managerial classes. Here, again, the word’s close association with “creativity” is instrumental, since creativity is often thought to be an intrinsic, instinctual human behavior. “Innovating” is therefore not only a business practice that will, as Franklin argued about his own industriousness, improve oneself in the eyes of both man and God. It is also a secular expression of the most fundamental individual and social features of the self—the impulse to understand and to improve the world. This is particularly evident in the discourse of social innovation, which Stanford’s Center for Social Innovation defines as a practice that aims to leverage the private market to solve modern society’s most intractable “problems”: housing, pollution, hunger, education, and so on. When something like world hunger is described as a “problem” in this way, though, international food systems, agribusiness, international trade, land ownership, and other sources of malnutrition disappear. Structures of oppression and inequality simply become discrete “problems” for which no one has yet invented the fix. They are individual nails in search of a hammer, and the social innovator is quite confident that a hammer exists for hunger.

    Microfinance is another one of these hammers. As one economist critical of the microcredit system notes at the beginning of his own book on the subject, “most accounts of microfinance—the large-scale, businesslike provision of financial services to poor people—begin with a story” (Roodman 2012, 1). These stories usually recount an encounter with a sympathetic third-world subject. For Roodman, the microfinancial stories of hardship and transcendence have a seductive power over their first-world audiences, of which he is rightly suspicious. As we saw above, Schumpeter’s procedural “entrepreneurial function” is itself also a story of a creative entrepreneur navigating the tempests of modern capitalism. In the postmodern romance of social innovation in the “underdeveloped” world, the Western subject of the drama is both ever-present and constantly disavowed. The TED Talk, with which we began, is in its crude way the most expressive genre of this contemporary version of the entrepreneurial romance.

    Rhetorically transformative but formally archaic—what could be less innovative than a lecture?—the genre of the social innovation TED Talk models innovation ideology’s combination of grandiosity and proceduralism, even as its strict generic conventions—so often and easily parodied—repeatedly undermine the speakers’ regular claims to transcendent breakthroughs. For example, in his TEDx Montreal address, Ethan Kay (2012) opens in the conventional way: with a dire assessment of a monumental, yet easily overlooked, social problem in a third-world country. “If we were to think about the biggest problems affecting our world,” Kay begins, “any socially conscious person would have to include poverty, disease, and climate change. And yet there is one thing that causes all three of these simultaneously, that we pay almost no attention to, even though a very good solution exists.” Once the scope of the problem is established, next comes the sentimental identification. The knowledge of this social problem is only possible because of the hospitality and insight of some poor person abroad, something familiar from Geidel’s reading of Peace Corps memoirs and Roodman’s microcredit stories: in Kay’s case, it is in the unelectrified “hut” of a rural Indian woman where, choking on cooking smoke, he realizes the need for a clean-burning indoor cookstove. Then comes the self-deprecating joke, in which the speaker acknowledges his early naivete and establishes his humble capacity for self-reflection. (“I’m just a guy from Cleveland, Ohio, who has trouble cooking a grilled-cheese sandwich,” says Kay, winning a few reluctant laughs.) And then the technocratic solution arrives: when the insight thus acquired is subjected to the speaker’s reason and empathy, the deceptively simple and yet world-making “solution” emerges. Despite the prominent formal place of the underdeveloped character in this genre, the teller of the innovation story inevitably ends up the hero. The throat-clearing self-seriousness, the ritualistic gestures of humility, the promise to the audience of transformative change without inconvenient political consequences, and the faith in technology as a social leveler all perform the TED Talk’s ego-ideal of social “innovation.”

    One of the most successful social innovation TED Talks is Sugata Mitra’s tale of the “self-organized learning environment” (SOLE). Mitra won a $1 million prize from TED in 2013 for a talk based on his “hole-in-the-wall” experiment in New Delhi, which tests poor children’s ability to learn autonomously, guided only by internet-enabled laptops and cloud-based adult mentors abroad (TED.com 2013). Mitra’s idea is an excellent example of innovation discourse’s combination of the procedural and the prophetic. In the case of the latter, he begins: “There was a time when Stone Age men and women used to sit and look up at the sky and say, ‘What are those twinkling lights?’ They built the first curriculum, but we’ve lost sight of those wondrous questions” (Mitra 2013). What gets us to this lofty goal, however, is a comparatively simple process. True to genre, Mitra describes the SOLE as the fruit of a serendipitous discovery. After he and his colleagues cut a hole in the wall that separated his technology firm’s offices from an adjoining New Delhi slum, they placed an internet-enabled computer in the new common area. When Mitra returned weeks later, he found local children using it expertly. Leaving unsupervised children in a room with a laptop, it turns out, activates innate capacities for self-directed learning stifled by conventional schooling. Mitra promises a cost-effective solution to the problem of primary and secondary education in the developing world—do virtually nothing. “This is done by children without the help of any teacher,” Mitra confidently concludes, sharing a PowerPoint slide of the students’ work. “The teacher only raises the question, and then stands back and admires the answer.”

    When we consider innovation’s religious origins in false prophecy, its current orthodoxy in the discourse of technological evangelism—and, more broadly, in analog versions of social innovation—is often a nearly literal example of Rayvon Fouché’s argument that the formerly colonized, “once attended to by bibles and missionaries, now receive the proselytizing efforts of computer scientists wielding integrated circuits in the digital age” (2012, 62). One of the additional ironies of contemporary innovation ideology, though, is that these populations exploited by global capitalism are increasingly charged with redeeming it—the comfortable denizens of the West need only “stand back and admire” the process driven by the entrepreneurial labor of the newly digital underdeveloped subject. To the pain of unemployment, the selfishness of material pursuits, the exploitation of most of humanity by a fraction, the specter of environmental cataclysm that stalks our future and haunts our imagination, and the scandal of illiteracy, market-driven innovation projects like Mitra’s “hole in the wall” offer next to nothing, while claiming to offer almost everything.

    _____

    John Patrick Leary is associate professor of English at Wayne State University in Detroit and a visiting scholar in the Program in Literary Theory at the Universidade de Lisboa in Portugal in 2019. He is the author of A Cultural History of Underdevelopment: Latin America in the U.S. Imagination (Virginia 2016) and Keywords: The New Language of Capitalism, forthcoming in 2019 from Haymarket Books. He blogs about the language and culture of contemporary capitalism at theageofausterity.wordpress.com.

    _____

    Notes

    [1] “The entrepreneur and his function are not difficult to conceptualize,” Schumpeter writes: “the defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation).”

    [2] The term “underdeveloped” was only a bit older: it first appeared in “The Economic Advancement of Under-developed Areas,” a 1942 pamphlet on colonial economic planning by a British economist, Wilfrid Benson.

    [3] I explore this semantic and intellectual history in more detail in my book, A Cultural History of Underdevelopment (Leary 2016, 4-10).

    [4] Morozov describes solutionism as an ideology that sanctions the following delusion: “Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”

    [5] “Although the number of unbanked people globally dropped by half a billion from 2011 to 2014,” reads a Foundation web site’s entry under the “financial services” tab, “two billion people are still locked out of formal financial services.” One solution to this problem focuses on Filipino convenience stores, called “sari-sari” stores: “In a project funded by the JPMorgan Chase Foundation, Grameen Foundation is empowering sari-sari store operators to serve as digital financial service agents to their customers.” Clearly, the project must result not only in connecting customers to financial services, but in opening up new markets to JP Morgan Chase. See “Alternative Channels.”

    [6] This quoted definition of “humanitarian innovation” is attributed to an interview with an unnamed international aid worker.

    [7] Erickson (2015, 113-14) writes that “design thinking” in public education “offers the illusion that structural and institutional problems can be solved through a series of cognitive actions…” She calls it “magic, the only alchemy that matters.”

    [8] A management-studies article on the growth of so-called “innovation prizes” for global development claimed sunnily that at a recent conference devoted to such incentives, “there was a sense that society is on the brink of something new, something big, and something that has the power to change the world for the better” (Everett, Wagner, and Barnett 2012, 108).

    [9] “It is here that we have to look more carefully at the ‘arts of government’ that have so radically reconfigured the world in the last few decades,” writes Ferguson, “and I think we have to come up with something more interesting to say about them than just that we’re against them.” Ferguson points out that neoliberalism in Africa—the violent disruption of national markets by imperial capital—looks much different than it does in western Europe, where it is usually treated as a form of political rationality or an “art of government” modeled on markets. It is this political rationality, as it is formed through an encounter with the “third world” object of imperial neoliberal capital, that is my concern here.

    _____

    Works Cited

    • Bacon, Francis. 1844. The Works of Francis Bacon, Lord Chancellor of England. Vol. 1. London: Carey and Hart.
    • Banerjee, Abhijit, et al. 2015. “The Miracle of Microfinance? Evidence from a Randomized Evaluation.” American Economic Journal: Applied Economics 7:1.
    • Betts, Alexander, Louise Bloom, and Nina Weaver. 2015. “Refugee Innovation: Humanitarian Innovation That Starts with Communities.” Humanitarian Innovation Project, University of Oxford.
    • Betts, Alexander and Louise Bloom. 2014. “Humanitarian Innovation: The State of the Art.” OCHA Policy and Studies Series.
    • Boltanski, Luc and Eve Chiapello. 2007. The New Spirit of Capitalism. Translated by Gregory Elliot. New York: Verso.
    • Brown, Tim and Jocelyn Wyatt. 2010. “Design Thinking for Social Innovation.” Stanford Social Innovation Review.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Burke, Edmund. 1798. The Beauties of the Late Right Hon. Edmund Burke, Selected from the Writings, &c., of that Extraordinary Man. London: J.W. Myers.
    • Calvin, John. 1763. The Institution of the Christian Religion. Translated by Thomas Norton. Glasgow: John Bryce and Archibald McLean.
    • Clark, Donald. 2013. “Sugata Mitra: Slum Chic? 7 Reasons for Doubt.”
    • Cowen, M.P. and R.W. Shenton. 1996. Doctrines of Development. London: Routledge.
    • De la Chaux, Marlen. 2015. “Rethinking Refugee Camps: Turning Boredom into Innovation.” The Conversation (Sep 24).
    • De la Chaux, Marlen and Helen Haugh. 2014. “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps.” Judge Business School, University of Cambridge.
    • Erickson, Megan. 2015. Class War: The Privatization of Childhood. New York: Verso.
    • Everett, Bryony, Erika Wagner, and Christopher Barnett. 2012. “Using Innovation Prizes to Achieve the Millennium Development Goals.” Innovations: Technology, Governance, Globalization 7:1.
    • Ferguson, James. 2009. “The Uses of Neoliberalism.” Antipode 41:S1.
    • Fouché, Rayvon. 2012. “From Black Inventors to One Laptop Per Child: Exporting a Racial Politics of Technology.” In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge. 61-84.
    • Frank, Andre Gunder. 1991. The Development of Underdevelopment. Stockholm, Sweden: Bethany Books.
    • Geidel, Molly. 2015. Peace Corps Fantasies: How Development Shaped the Global Sixties. Minneapolis: University of Minnesota Press.
    • Gilman, Nils. 2003. Mandarins of the Future: Modernization Theory in Cold War America. Baltimore: Johns Hopkins University Press.
    • Godin, Benoit. 2015. Innovation Contested: The Idea of Innovation Over the Centuries. New York: Routledge.
    • Goldman, Emma. 2000. “Anarchism: What It Really Stands For.” Marxists Internet Archive.
    • Grameen Foundation India. No date. “Our History.”
    • Hobbes, Thomas. 1949. De Cive, or The Citizen. New York: Appleton-Century-Crofts.
    • Institute of Leadership and Management. 2007. Managing Creativity and Innovation in the Workplace. Oxford, UK: Elsevier.
    • Johnson, Steven. 2011. The Innovator’s Cookbook: Essentials for Inventing What is Next. New York: Riverhead.
    • Kay, Ethan. 2012. “Saving Lives Through Clean Cookstoves.” TEDx Montreal.
    • Kelley, Tom. 2001. The Art of Innovation: Lessons in Creativity from IDEO, America’s Leading Design Firm. New York: Crown Business.
    • Larrain, Jorge. 1991. Theories of Development: Capitalism, Colonialism and Dependency. New York: Wiley.
    • Leary, John Patrick. 2016. A Cultural History of Underdevelopment: Latin America in the U.S. Imagination. Charlottesville: University of Virginia Press.
    • Lepore, Jill. 2014. “The Disruption Machine: What the Gospel of Innovation Gets Wrong.” The New Yorker (Jun 23).
    • Marshall, Marguerite Moore. 1914. “In Dancing the Denatured Tango the Couple Keep Two Feet Apart.” The Evening World (Jan 24).
    • Mitra, Sugata. 2013. “Build a School in the Cloud.”
    • Morozov, Evgeny. 2014. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
    • Moss, Frank. 2011. The Sorcerers and Their Apprentices: How the Digital Magicians of the MIT Media Lab Are Creating the Innovative Technologies that Will Transform Our Lives. New York: Crown Business.
    • National Economic Council and Office of Science and Technology Policy. 2015. “A Strategy for American Innovation.” Washington, DC: The White House.
    • North, Michael. 2013. Novelty: A History of the New. Chicago: University of Chicago Press.
    • Nustad, Knut G. 2007. “Development: The Devil We Know?” In Exploring Post-Development: Theory and Practice, Problems and Perspectives, edited by Aram Ziai. London: Routledge. 35-46.
    • Obrecht, Alice and Alexandra T. Warner. 2014. “More than Just Luck: Innovation in Humanitarian Action.” London: ALNAP/ODI.
    • O’Connor, Kevin and Paul B. Brown. 2003. The Map of Innovation: Creating Something Out of Nothing. New York: Crown.
    • Peters, Tom. 1999. The Circle of Innovation: You Can’t Shrink Your Way to Greatness. New York: Vintage.
    • Polgreen, Lydia and Vikas Bajaj. 2010. “India Microcredit Faces Collapse From Defaults.” The New York Times (Nov 17).
    • Reid, Julian and Brad Evans. 2014. Resilient Life: The Art of Living Dangerously. New York: John Wiley and Sons.
    • Rodney, Walter. 1981. How Europe Underdeveloped Africa. Washington, DC: Howard University Press.
    • Rogers, Everett M. 1983. Diffusion of Innovations. Third edition. New York: The Free Press.
    • Roodman, David. 2012. Due Diligence: An Impertinent Inquiry into Microfinance. Washington, DC: Center for Global Development.
    • Ross, Kristin. 1996. Fast Cars, Clean Bodies: Decolonization and the Reordering of French Culture. Cambridge, MA: The MIT Press.
    • Rostow, Walt W. 1965. The Stages of Economic Growth: A Non-Communist Manifesto. New York: Cambridge University Press.
    • Saldaña-Portillo, Josefina. 2003. The Revolutionary Imagination in the Americas and the Age of Development. Durham, NC: Duke University Press.
    • Schumpeter, Joseph. 1934. The Theory of Economic Development. Cambridge, MA: Harvard University Press.
    • Schumpeter, Joseph. 1941. “The Creative Response in Economic History.” The Journal of Economic History 7:2.
    • Schumpeter, Joseph. 2003. Capitalism, Socialism, and Democracy. London: Routledge.
    • Seiter, Ellen. 2005. The Internet Playground: Children’s Access, Entertainment, and Miseducation. New York: Peter Lang.
    • Shakespeare, William. 2005. Henry IV. New York: Bantam Classics.
    • Smithers, Rebecca. 2015. “University Installs Prototype ‘Pee Power’ Toilet.” The Guardian (Mar 5).
    • Stanford Graduate School of Business, Center for Social Innovation. No date. “Defining Social Innovation.”
    • Sweezy, Paul. 1943. “Professor Schumpeter’s Theory of Innovation.” The Review of Economics and Statistics 25:1.
    • TED.com. No date. “Our Organization.”
    • TED.com. 2013. “Sugata Mitra Creates a School in the Cloud.”
    • Truman, Harry. 1949. “Inaugural Address, January 20, 1949.”
    • UNICEF. 2014. “UNICEF Innovation Annual Report 2014: Focus on Future Strategy.”
    • USAID. 2017. “DIV’s Model in Detail.” (Apr 3).
    • Weber, Max. 2001. The Protestant Ethic and the Spirit of Capitalism. Translated by Talcott Parsons. London: Routledge Classics.
    • Yunus, Muhammad. 2012. “A History of Microfinance.” TEDx Vienna.
  • Rob Hunter — The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory

    Rob Hunter — The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory

    Rob Hunter [*]

    Introduction

    In official, commercial, and activist discourses, networked computing is frequently heralded for establishing a field of inclusive, participatory political activity. It is taken to be the latest iteration of, or a standard-bearer for, “technology”: an autonomous force penetrating the social world, an independent variable whose magnitude may not directly be modified and whose effects are or ought to be welcomed. The internet, its component techniques and infrastructures, and related modalities of computing are often supposed to be accelerating and multiplying various aspects of the ideological lynchpin of the neoliberal order: individual sovereignty.[1] The internet is hailed as the dawn of a new communication age, one in which democracy is to be reinvigorated and expanded through the publicity and interconnectivity made possible by new forms of networked relations among informed consumers.

    Composed of consumer choice, intersubjective rationality, and the activity of the autonomous subject, such sovereignty also forms the basis of many strands of contemporary ethical thought—which has increasingly come to displace rival conceptions of political thought in sectors of the Anglophone academy. In this essay, I focus on two turns and their parallels—the turn to the digital in commerce, politics, and society; and the turn to the ethical in professional and elite thought about how such domains should be ordered. I approach the digital turn through the case of the free and open source software movements. These movements are concerned with sustaining a publicly-available information commons through certain technical and juridical approaches to software development and deployment. The community of free, libre, and open source (FLOSS) developers and maintainers is one of the more consequential spaces in which actors frequently endorse the claim that the digital turn precipitates an unleashing of democratic potential in the form of improved deliberation, equalized access to information, networks, and institutions, and a leveling of hierarchies of authority. I approach the ethical turn through an examination of the political theory of democracy, particularly as it has developed in the work of theorists of deliberative democracy like Jürgen Habermas and John Rawls.

    By FLOSS I refer, more or less interchangeably, to software that is licensed such that it may be freely used, modified, and distributed, and whose source code is similarly available so that it may be inspected or changed by anyone (Free Software Foundation 2018). (It stands in contradistinction to “closed source” or proprietary software that is typically produced and sold by large commercial firms.) The agglomeration of “free,” “libre,” and “open source” reflects the multiple ideological geneses of non-proprietary software. Briefly, “free” or “libre” software is so named because, following Stallman’s (2015) original injunction in 1985, the conditions of its distribution forbid rendering the code (or derivative code) proprietary for the sake of maximizing the freedom of downstream coders and users to do as they see fit with it. The signifier “free” primarily connotes the absence of restrictions on use, modification, and distribution, rather than considerations of cost or exchange value. Of crucial importance to the free software movement was the adoption of “copyleft” licensure of software, in which copies of software are freely distributed with the restriction that subsequent users and distributors not impose additional restrictions upon subsequent distribution. As Stallman has noted, copyleft is built on a deliberate contradiction of copyright: “Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free” (Stallman 2002, 22). Avowed members of the free software movement also conceive of free software’s importance not just in technical terms but in moral terms as well. For them, the free software ecosystem is a moral-pedagogical space in which values are reproduced and developers’ skills are fostered through unfettered access to free software (Kelty 2008).

    “Open source” software derives its name from a push—years after Stallman’s cri de coeur—that stressed non-proprietary software’s potential in the business world. Advocates of the open source framing downplayed the libertarian-individualist ethos of the early free software movement, discarding its rhetorics of individual freedom in favor of invocations of “innovation,” “openness,” and neoliberal subjectivity. Toward the end of the twentieth century, open source activists “partially codified this philosophical frame by establishing a clear priority for pragmatic technical achievement over ideology (which was more central to the culture of the Free Software Foundation)” (Weber 2005, 165). In the current moment, antagonisms between proponents of the respective terminologies are comparatively muted. In many FLOSS developer spaces, the most commonly avowed view is that the practical upshot of the differences in emphasis between “free” and “open source” is unimportant: the typical user or producer doesn’t care, and the immediate social consequences of the distinction are close to nil. (It is noteworthy that this framing is fully compatible with the self-consciously technicist, pragmatic framing of the open source movement, less so with the ideological commitments of the free software movement. Whether or not it is the case at the micro level that free software and open source software retain meaningfully different political valences is beyond the scope of this essay, although it is possible that voices welcoming an elision of “free” and “open source” do protest too much.)

    FLOSS is situated at the intersection of several trends and tendencies. It is a body of technical practice (hacking or coding); it is also a political-ethical formation. FLOSS is an integral component of capitalist software development—but it is also a hobbyist’s toy and a creator’s instrument (Kelty 2008), a would-be entrepreneur’s tool (Weber 2005), and an increasingly essential piece of academic kit (see, e.g., Coleman 2012). A generation of scholarship in anthropology, cultural studies, history, sociology, and other related fields has established that FLOSS is an appropriate object of study not only because its participants are typically invested in the internet-as-emancipatory-technology narrative, but also because free and open source software development has been profoundly consequential for both the cultural and technical character of the present-day information commons.

    In the remainder of the essay, I gesture at a critique of this view of the internet’s alleged emancipatory potential by examining its underlying assumptions and the theory of democracy to which it adheres. This theory trades on the idea that democracy is an ethical practice, one that achieves its fullest expression in the absence of coercion and the promotion of deliberative norms. This approach to thinking about democracy has numerous analogues in current debates in political theory and political philosophy. In prevailing models of liberal politics, institutions and ethical constraints are privileged over concepts like organization, contestation, and—above all—the pursuit and exercise of power. Indeed, within contemporary liberal political thought it is sometimes difficult to discern the activity of thinking about politics as such. I do not argue here for the merits of contestatory democracy, nor do I conceal an unease with the depoliticizing tendencies of deliberative democracy, or with the tendency to substitute the ethical for the political. Instead I draw out the theoretical commonalities between the emergence of deliberative democracy and the turn toward the digital in relations of production and reproduction. I suggest that critiques of the shortcomings of liberal thought regarding political activity and political persuasion are also applicable to the social and political claims and propositions that undergird the strategies and rhetorics of FLOSS. The hierarchies of commitment that one finds in contemporary liberalism may be detected in FLOSS thought as well. Liberalism typically prioritizes intersubjectivity over mass political action and contestation. Similarly, FLOSS rhetoric focuses on ethical persuasion rather than on the pursuit of the influence and social power by which proprietarian computing might be resisted or challenged. Liberalism also prioritizes property relations over other social relations. The FLOSS movement similarly retains a stark commitment to the priority of liberal property relations and to the idea of personal property in digital commodities (Pedersen 2010).

    In the context of FLOSS and the information commons, a depoliticized theory of democracy fails to attend to the dynamics of power, and to crucial considerations of political economy in communications and computing. An insistence on conceiving of democracy as an ethical aspiration or as a moral ideal—rather than as a practice of mass politics with a given historical and institutional specificity—serves to obscure crucial features of the internet as a cultural and social phenomenon. It also grants an illusory warrant for ideological claims to the effect that computing and internet-mediated communication constitute meaningful and consequential forms of civic participation and political engagement. As the ethical displaces the political, so the technological displaces the ethical. In the process, the workings of power are obscured, the ideological trappings of technologically-mediated domination are mystified, and the social forms that are peculiar to internet subcultures are naturalized as typifying the form of social organization that all democrats ought to seek after.

    In identifying the theoretical affinities between the liberalism of the digital turn and the ethical turn in liberal political theory, I hope to contribute to an enriched, interdisciplinary understanding of the available spaces for investigation and research with respect to emerging trends in digital life. The social relations that are both constituted by and constitutive of the worlds of software, networked communication, and pervasive computing are rightly becoming the objects of sustained study within disparate fields in humanistic disciplines. This essay aims at provoking new questions in such study by examining the theoretical linkages between the digital turn and the ethical turn.

    The Digital Turn

    The internet—considered in the broadest possible sense, as something composed of networks and terminals through which various forms of sociality are mediated electronically—attracts, of course, no small amount of academic, elite, and popular attention. A familiar story tends to arise out of this attention. The digital turn ushers in the promise of digital democracy: an expansion of opportunities for participation in politics (Klein 1999), and a revolutionizing of communications that connects individuals in networks (Castells 2010) of informed and engaged consumers and producers of non-material content (Shirky 2008). Dissent would prove impossible to stifle, as information—endowed with its own virtual, composite personality, and empowered by sophisticated technologies—would both want and be able to be free. “The Net interprets censorship as damage and routes around it” (as cited in Reagle 1999) is a famous—and possibly apocryphal—variant of this piece of folk wisdom. Pervasive networked computing ensures that citizens will be self-mobilizing in their participation in politics and in their scrutiny of corruption and rights abuses. Capital, meanwhile, can anticipate a new suite of needs to be satisfied through informational commodities. The only losers are governments that, despite enthusiastic rhetoric about an “information superhighway,” are unable to keep pace with technological growth, or with popular adoption of decentralized communications media. Their capacities to restrict or control discourse will be crippled; their control over their own populations will diminish in proportion to the growth of electronically mediated communication.[2]

    Much of the excitement over the internet is freighted with neoliberal (Brown 2005) ideology, either in implicit or explicit terms. On this view, liberalism’s focus on the unfettered movement of commodities and the unrestricted consumption activities of individuals will find its final and definitive instantiation in a world of digital objects (with a marginal cost approaching zero) and the satisfaction of consumer needs through novel and innovative patterns of distribution. The cultural commons may be reclaimed through transformations of digital labor—social, collaborative, and remix-friendly (Benkler 2006). Problems of production can be solved through increasingly sophisticated chains of logistics (Bonacich and Wilson 2008), finally fulfilling the unrealized cybernetic dreams of planners and futurists in the twentieth century.[3] Political superintendence of the market—and many other social fields—will be rendered redundant by rapid, unmediated feedback mechanisms linking producers and consumers. This contradictory utopia will achieve a non-coercive panopticon of full information, made possible through the endless concatenation of individual decisions to consume, evaluate, and generate information (Shirky 2008).

    This prediction has not been vindicated. Contemporary observers of the internet age do not typically describe it in terms of democratic vistas and cultural efflorescence. They are likelier to examine it in terms of the extension of technologies of control and surveillance, and in terms of the subsumption of sociality under the regime of neoliberal capital accumulation. Indeed, the digital turn follows a trajectory similar to that of the neoliberal turn in governance. The neoliberal turn has enhanced rather than undermined the capacities of the state; those capacities are directed not at the provision of public goods and social services but rather at coercive security and labor discipline. The digital turn’s course has decidedly not been one of individual empowerment and an expansion of the scope of participatory forms of democratic politics. Instead, networked computing is now a profit center for a small number of titanic capitals. Certainly, the revolution in communications technology has influenced social relations. But the political consequences of that influence do not constitute a profound transformation and extension of democracy (Hindman 2008). Nor are the consequences of the revolution in communications uniformly emancipatory (Morozov 2011). More generally, the subsumption of greater swathes of sociality within the logics of computing presents the risk of the enclosure of public information, and of the extension of the capabilities of the powerful to surveil and coerce others while evading public supervision (Drahos 2002, Golumbia 2009, Pasquale 2015).

    Extensive critiques of “the Californian ideology” (Barbrook and Cameron 2002), renascent “cyberlibertarianism” (Dahlberg 2010) and its affinities with longstanding currents in right-wing thought (Golumbia 2013), and related ideological formations are all ready to hand. The digital turn is of course not characterized by a singular politics. However, the hegemonic political tendency associated with it may be fairly described as a complex of libertarian ideology, neoliberal political economy, and antistatist rhetoric. The material substrate for this complex is the burgeoning arena of capitals pursuing profits through the exploitation of “digital labor” (Fuchs 2014). Such labor occurs in software development, but also in hardware manufacturing; the buying, selling, and licensing of intellectual property; and the extractive industries providing the necessary mineral ores, rare earth metals, and other primary inputs for the production of computers (on this point see especially Dyer-Witheford 2015). The growth of this sector has been accomplished through the exploitation of racialized and marginalized populations (see, for example, Amrute 2016), the expropriation of the commons through the transformation of public assets into private property, and the decoupling in the public mind of any link between easily accessed electronic media and computing power, on the one hand, and massive power consumption and environmental devastation, on the other.

    To the extent that hopes for the emancipatory potential of a cyberlibertarian future have been dashed, enthusiasm for the left-right hybrid politics that first bruited it is still widespread. In areas in which emancipatory hopes remain unchastened by the experience of capital’s colonization of the information commons, that enthusiasm is undiminished. FLOSS movements are important examples of such areas. In FLOSS communities and spaces, left-liberal commitments to social justice causes are frequently melded with a neoliberal faith in decentralized, autonomous activity in the development, deployment, and maintenance of computing processes. When FLOSS activists self-reflexively articulate their political commitments, they adopt rhetorics of democracy and cooperative self-determination that are broadly left-liberal. However, the politics of FLOSS, like hacker politics in general, also betray a right-libertarian fixation on the removal of obstacles to individual wills. The hacker’s political horizon is the unfettering of the socially untethered, electronically empowered self (Borsook 2000). Similarly, the liberal commitments that undergird contemporary theories of “deliberative democracy” are easily adapted to serve libertarian visions of the good society.

    The Ethical and the Political

    The liberalism of such political theory as is encountered in FLOSS discourse may be fruitfully compared to the turn toward deliberative models of social organization. This turn is characterized by a dual trend in postwar political thought, centered in but not exclusive to the North Atlantic academy. It consists of the elision of theoretical distinctions between individual ethical practice and democratic citizenship, alongside a widening of the theoretical gap between agonistic practices—contestation, conflict, direct action—and policy-making within the institutional context of liberal constitutionality. The political is often equated with conflict—and thereby, potentially, violence or coercion. The ethical, by contrast, comes closer to resembling democracy as such. Democracy is, or ought to be, “depoliticized” (Pettit 2004); deliberative democracy, aimed at the realization of ethical consensus, is normatively prior to aggregative democracy or the mere counting of votes. On this view, the historical task of democracy is not to grant greater social purchase to political tendencies or formations; nor does it consist in forging tighter links between decision-making institutions and the popular will. Rather, democracy is a legitimation project, under which the decisions of representative elites are justified in terms of the publicity of the reasons or justifications supplied on their behalf. The uncertain movement between these two poles—conceiving of democracy as a normative ideal, and conceiving of it as a description of adequately legitimated institutions—is hardly unique to contemporary democratic theory. The turn toward the deliberative and the ethical is distinguished by the narrowness of its conception of the democratic—indeed by its insistence that the democratic, properly understood, is characterized by the dampening of political conflict and a tendential movement toward consensus.

    Why ought we consider the trajectory of postwar liberal thought in conjunction with the digital turn? First, there are, of course, similarities and continuities between the fortunes of liberal ideology in both the world of software work and the world of academic labor. The former is marked to a much greater extent by a widespread distrust of mechanisms of governance and by outpourings of an ascendant strain of libertarian triumphalism. Where ideological development in software work has charted a libertarian course, in academic Anglophone political thought it has more closely followed a path of neoliberal restructuring. To the extent that we maintain an interest in the consequences of the digitization of sociality, it is germane and appropriate to consider liberalism in software work and liberalism in professional political theory in tandem. However, there is a rather more important reason to chart the movement of liberal political thought in this context: many of the debates, problematics, and proffered solutions in the politico-ideological discourse in the world of software work are, as it were, always already present in liberal democratic theory. As such, an examination of the ethical turn—liberal democratic theory’s disavowal of contestation, and of the agon that interpellates structures of politics (Mouffe 2005, 80–105)—can aid subsequent examinations of the ontological, methodological, and normative presuppositions that inform the self-understanding of formations and tendencies within FLOSS movements. Both FLOSS discourses and professional democratic theory tend to discharge conclusions in favor of a depoliticized form of democracy.

    Deliberative democracy’s roots lie in liberal legitimation projects begun in response to challenges from below and outside existing power structures. Despite effacing its own political content, deliberative democracy must nevertheless be understood as a political project. Notable gestures toward the concept may be found in John Rawls’s theory-building project, beginning with A Theory of Justice (1971); and in Jürgen Habermas’s attempts to render the intellectual legacy of the Frankfurt School compatible with postwar liberalism, culminating in Between Facts and Norms (1996). These philosophical moves were being made at the same time as the fragmentation of the postwar political and economic consensus in developed capitalist democracies. Critics have detected a trend toward retrenchment in both currents: the evacuation of political economy—let alone Marxian thought—from critical theory; the accommodation made by Rawls and his epigones with public choice theory and neoliberal economic frames. The turn from contestatory politics in Anglophone political thought was simultaneous with the rise of a sense that the institutional continuity and stability of democracy were in greater need of defense than were demands for political criticism and social transformation. By the end of the postwar boom years, an accommodation with “neoliberal governmentality” (Brown 2015) was under way throughout North Atlantic intellectual life. The horizons of imagined political possibility were contracting at the very conjuncture when labor movements and left political formations foundered in the face of the consolidation of the capitalist restructuring under way since the third quarter of the twentieth century.

    Rawls’s account of justified institutions does not place a great emphasis on mass politics; nor does Habermas’s delineation of the boundaries of the ideal circumstances for communication—except insofar as the memory of fascism that Habermas inherited from the Frankfurt School weighs heavily on his forays into democratic theory. Mass politics is an inherently suspect category in Habermas’s thought. It is telling—and by no means surprising—that the two heavyweight theorists of North Atlantic postwar social democracy are primarily concerned with political institutions and with “the ideal speech situation” (Habermas 1996, 322–328) rather than with mass politics. They are both concerned with making justificatory moves rather than with exploring the possibilities and limits of mass politics and collective action. Rawls’s theory of justice describes a technocratic scheme for a minimally redistributive social democratic polity, while Habermas’s oeuvre has increasingly come to serve as the most sophisticated philosophical brief on behalf of the project of European cosmopolitan liberalism. Within the confines of this essay it is impossible to engage in a sustained consideration of the full sweep of Rawls’s political theory, including his conception of an egalitarian and redistributive polity and his constructivist account of political justification; similarly, the survey of Habermas presented here is necessarily compressed and abstracted. I restrict the scope of my critical gestures to the contributions made by Rawls and Habermas to the articulation of a deliberative conception of democracy. In this respect, they were strikingly similar:

    Both Rawls and Habermas assert, albeit in different ways, that the aim of democracy is to establish a rational agreement in the public sphere. Their theories differ with respect to the procedures of deliberation that are needed to reach it, but their objective is the same: to reach a consensus, without exclusion, on the ‘common good.’ Although they claim to be pluralist, it is clear that theirs is a pluralism whose legitimacy is only recognized in the private sphere and that it has no constitutive place in the public one. They are adamant that democratic politics requires the elimination of passions from the public sphere. (Mouffe 2013, 55)

    In neither Rawls’s nor Habermas’s writings is the theory of deliberative democracy simply the expression of a preference for the procedural over the substantive. It is better understood as a preference for unity and consensus, coupled with a minoritarian suspicion of the institutions and norms of mass electoral democracy. It is true that both their deliberative democratic theories evince considerable concern for the procedures and conditions under which issues are identified, alternatives are articulated, and decisions are made. However, this concern is motivated by a preoccupation with a particular substantive interest: the reproduction of liberal democratic forms. Such forms are valued not for their own sake—indeed, that would verge on incoherence—but because they are held to secure certain moral ends: respect for individuals, reciprocity of regard or recognition between persons, the banishment of coercion from public life, and so on. The ends of politics are framed in terms of morality—a system of universal duties or ends. The task of political theory is to envision institutions which can secure ends or goods that may be seen as intrinsically desirable. Notions that the political might be an autonomous domain of human activity, or that political theory’s ambit extends beyond making sense of existing configurations of institutions, are discarded. In their place is an approach to political thought rooted in concerns about technologies of governance. Such an approach concerns itself with political disagreement primarily insofar as it is a foreseeable problem that must be managed and contained.

    Depoliticized, deliberative democracy may be characterized as one or more of several forms of commitment to an apolitical conception of social organization. It is methodologically individualist: it takes the (adult, sociologically normative and therefore likely white and cis-male) individual person as the appropriate object of analysis and as the denominator to which social structures ultimately reduce. It is often intersubjective in its model of communication: that is, ideas are transmitted by and between individuals, typically or ideally two individuals standing in a relation of uncoerced respect with one another. It is usually deliberative in the kind of decision-making it privileges: authoritative decisions arise not out of majoritarian voting mechanisms or mass expressions of collective will, but rather out of discursive encounters that encourage the formation and exchange of claims whose content conforms to specific substantive criteria. It is often predicated on the notion that the most valuable or self-constitutive of individuals’ beliefs and understandings are pre-political: individual rational agents are “self-authenticating sources of valid claims” (Rawls 2001, 23). Their claims are treated as exogenous to the social and political contexts in which they are found. Depoliticized democracy is frequently racialized and erected on a series of assumptions and cultural logics of hierarchy and domination (Mills 1997). Finally, depoliticized democracy insists on a particular hermeneutic horizon: the publicity of reasons. For claims to be considered credible, and for exercises of public power to be considered legitimate, they must be comprehensible in terms of the worldviews, held premises, or anterior normative commitments of all persons who might somehow be affected by them.

    Theories of deliberative democracy are not merely suspicious of political disagreement—they typically treat it as pathological. Social cleavages over ideology (which may always be reduced to the concatenation of individual deliberations) are evidence either of bad faith argumentation or a failure to apprehend the true nature of the common good. To the extent that deliberative democracy is not nakedly elitist, it ascribes to those democratic polities it considers well-formed a capacity for a peculiar kind of authority. Such collectivities are capable, by virtue of their well-formed deliberative structures, of discharging decisions that are binding precisely because they are correct with reference to standards that are anterior to any dialectic that might take place within the social body itself. Consequently, much depends on the ideological content of those standards.

    The concept of public reason has acquired special potency in the hands of Rawls’s legatees in North American analytic political philosophy. Similar in aim to Habermas’s ideal speech situation, the modern idea of public reason is meant to model an ideal state of deliberative democracy. Rawls locates its origins in Rousseau (Rawls 2007, 231). However, it acquires a specifically Kantian conception in his elaboration (Rawls 2001, 91–94), and an extensive literature in analytic political philosophy is devoted to the elaboration of the concept in a Rawlsian mode (for a good recent discussion see Quong 2013). Public reason requires that contested policies’ justifications be comprehensible to those who controvert those policies. More generally, the polity in which the ideal of public reason obtains is one in which interlocutors hold themselves to be obliged to share, to the extent possible, the premises from which political reasoning proceeds. Arguments that are deemed to originate from outside the boundaries of public reason cannot serve a legitimating function. Public reason usually finds expression in the writings of liberal theorists as an explanation for why controverted policies or decisions may nevertheless be viewed as substantively appropriate and democratically legitimated.

    Proponents of public reason often cast the ideal as a commonplace of reasonable discussion that merely binds interlocutors to deliberate in good faith. However, public reason may also be described as a cudgel with which to police the boundaries of debate. It effectively cedes discursive power to those who controvert public policy—provided they possess sufficient social power—to control the trajectory of the discourse. Explicitly liberal in its philosophical genealogy, public reason is expressive of liberal democratic theory’s wariness with respect to both radical and reactionary politics. Many liberal theorists are primarily concerned to show how public reason constrains reactionaries from advancing arguments that rest on religious or theological grounds. An insistence on public reasonableness (perhaps framed through an appeal to norms of civility) may also allow the powerful to cavil at challenges to prevailing economic thought as well as to prevailing understandings of the relationship between the public and the religious.

    Habermas’s project on the communicative grounds of liberal democracy (1998) reflects a similar commitment to containing disagreement and establishing the parameters of when and how citizens may contest political institutions and the rules they produce and enforce. His “discourse principle” (1996, 107) is not unlike Rawls’s conception of public reason in that it is intended to serve as a justificatory ground for deliberations tending toward consensus. According to the discourse principle, a given rule or law is justified if and only if those who are to be affected by it could accept it as the product of a reasonable discourse. Much of Habermas’s work—particularly Between Facts and Norms (1996)—is devoted to establishing the parameters of reasonable discourses. Such cartographies are laid out not with respect to controversies arising out of actually existing politics (such as pan-European integration or the problems of contemporary German right-wing politics). They are instead sited within the coordinates of Habermas’s specification of the linguistic and pragmatic contours of the social world in established constitutional democracies. The practical application of the discourse principle is often recursive, in that the particular implications and the scope of the discourse principle require further elaboration or extension within any given domain of practical activity in which the principle is invoked. Despite its rarefied abstraction, the discourse principle is meant in the final instance to be embedded in real activities and sites of discursive activity. (Habermas’s work in ethics parallels his discourse-theoretic approach to politics. His dialogical principle of universalization holds that moral norms are valid insofar as their observance—and the effects of that observance—would be accepted singly and jointly by all those affected.)

    Both Rawls’s and Habermas’s conceptions of the communicative activity underlying collective decision-making are strongly motivated by intersubjective ethical concerns. If anything, Habermas’s discourse ethics, and the parallel moves that he makes in his interventions in political thought, are more exacting than Rawls’s conception of public reason, both in the discursive environments they presuppose and in the demands they place upon individual interlocutors. Both thinkers also conceive of political conflict as a field in which ethical questions predominate. Indeed, on these views political antagonism might be seen as pathological, or at least taken to be the locus of a sort of problem situation: if politics is taken to be a search for the common welfare (grounded in commonly avowed terms), or is held to consist in the provision of public goods whose worth can, in principle, be agreed upon, then it would make sense to think that political antagonism is an ill to be avoided. Politics would then be exceptional, whereas the suspension of political antagonism for the sake of decisive, authoritative decision-making would be the norm. This is the core constitutive contradiction of the theory of deliberative democracy: the priority given to discussion and rationality tends to foreclose the possibility of contestation and disagreement.

    If, however, politics is a struggle for power in the pursuit of collective interests, it becomes harder to insist that the task of politics is to smooth over differences rather than to articulate them and act upon them. Both Rawls and Habermas have been the subjects of extensive critique by proponents of several different perspectives in political theory. Communitarian critics have typically charged Rawls with relying on a too-atomized conception of individual subjects, whose preferences and beliefs are unformed by social, cultural or institutional contexts (Gutmann 1985); similar criticisms have been mounted against Habermas (see, for example, C. Taylor 1989). Both thinkers’ accounts of the foundations of political order fail to acknowledge the politically constitutive aspects of gender and sexuality (Okin 1989; Meehan 1995). From the perspective of a more radical conception of democracy, even Rawls’s later writings, in which he claims to offer a constructivist (rather than metaphysical) account of political morality (Rawls 1993), do not necessarily pass muster, particularly given that his theory is fundamentally a brief for liberalism and not for the democratization of society (for elaboration of this claim see Wolin 1996).

    Deliberative democracy, considered as a prescriptive model of politics, represents a striking departure both from political thought on the right—typically preoccupied with maintaining cultural logics and preserving existing social hierarchies—and political thought on the left, which often emphasizes contingency, conflict, and the priority of collective action. Both of these latter approaches to politics take social phenomena as subjects of concern in and of themselves, and not merely as intermediate formations which reduce to individual subjectivity. The substitution of the ethical for the political marks an intellectual project that is adequate to the imperatives of a capitalist political economy. The contradictory merger of the ethical anxieties underpinning deliberative democratic theory and liberal democracy’s notional commitment to legitimation through popular sovereignty tends toward quietism and immobilism.

    FLOSS and Democracy

    The free and open source software movements are cases of distinct importance in the emergence of digital democracy. Their traditions, and many of the actors who participate in them, antedate the digital turn considerably: the free software movement began in earnest in the mid-1980s, while its social and technical roots may be traced further back and are tangled with countercultural trends in computing in the 1970s. The movements display durable commitments to ethical democracy in their rhetoric, their organizational strategies, and the philosophical presuppositions that are revealed in their aims and activities (Coleman 2012).

    FLOSS is sited at the intersection of many of liberal democratic theory’s desiderata: property, persuasion, rights, and ethics. The movement is a flawed, incompletely successful, but suggestive and instructive attempt at reconfiguring capitalist property relations—importantly, and fatally, from inside an existing set of capitalist property relations—for the sake of realizing liberal ethical commitments with respect to expression, communication, and above all personal autonomy. Self-conscious hackers in the world of FLOSS conceive of their shared goals as the maximization of individual freedom with respect to the use of computers. Coleman describes how many hackers conceive of this activity in explicitly ethical terms. For them, hacking is a vital expression of individual freedom—simultaneously an aesthetic posture and a furtherance of specific ethical projects (such as the dissemination of information, or the empowerment of the alienated subject).

    The origins of the free software movement are found in the countercultural currents of computing in the 1970s, when several lines of inquiry and speculation converged: cybernetics, decentralization, critiques of bureaucratic organization, and burgeoning individualist libertarianism. Early hacker values—such as unfettered sharing and collaboration, a suspicion of distant authority given expression through decentralization and redundancy, and the maximization of the latitude of individual coders and users to alter and deploy software as they see fit—might be seen as the outflowing of several political traditions, notably participatory democracy and mutualist forms of anarchism. Certainly, the computing counterculture born in the 1970s was self-consciously opposed to what it saw as the bureaucratized, sclerotic, and conformist culture of major computing firms and research laboratories (Barbrook and Cameron 2002). Richard Stallman’s 1985 declaration of the need for, and the principles underlying, the free development of software is often treated as the locus classicus of the movement (Stallman 2015b). Stallman succeeded in instigating a narrow kind of movement, one whose social specificity it is possible to trace. Its social basis consisted of communities of software developers, analysts, administrators, and hobbyists—in a word, hackers—that shared Stallman’s concerns over the subsumption of software development under the value-expanding imperatives of capital. As they saw it, the values of hacking were threatened by a proprietarian software development model predicated on the enclosure of the intellectual commons.

    Democracy, as it is championed by FLOSS advocates, is not necessarily an ideal of well-ordered constitutional forms and institutions whose procedures are grounded in norms of reciprocity and intersubjective rationality. It is characterized by a tension between an enthusiasm for volatile forms of participatory democracy and a tendency toward deference to the competence or charisma (the two are frequently conflated) of leaders. Nevertheless, the parallels between the two political projects—deliberative democracy and hacker liberation under the banner of FLOSS—are striking. Both projects share an emphasis on the persuasion of individuals, such that intersubjective rationality is the test of the permissibility of power arrangements or use restrictions. As such, both projects—insofar as they are to be considered to be interventions in politics—are necessarily self-limiting.

    Exponents of digital democracy rely on a conception of democracy that is strikingly similar to the theory of ethical democracy considered above. The constitutive documents and inscriptive commitments of various FLOSS communities bear witness to this. FLOSS communities should attract our interest because they are frequently animated by ethical and political concerns which appear to be liberal—even left-liberal—rather than libertarian. Barbrook and Cameron’s “Californian ideology” is frequently manifested in libertarian rhetorics that tend to have a right-wing grounding. The rise of Bitcoin is a particularly resonant recent example (Golumbia 2016). The adulation that accompanies the accumulation of wealth in Silicon Valley furnishes a more abstract example of the ideological celebration of acquisitive amour propre in computing’s social relations. The ideological substrate of commercial computing is palpably right-wing, at least in its orientation to political economy. As such, it is all the more noteworthy that the ideological commitments of many FLOSS projects appear to be animated by ethico-political concerns that are more typical of left-liberalism: consensus-seeking modes of collective decision-making; recognition of the struggles and claims of members of marginalized or oppressed groups; and the affirmation of differing identities.

    Free software rhetoric relies on concepts like liberty and freedom (Free Software Foundation 2018). It is in this rhetoric that free software’s imbrication within capitalist property relations is most apparent:

    Freedom means having control over your own life. If you use a program to carry out activities in your life, your freedom depends on your having control over the program. You deserve to have control over the programs you use, and all the more so when you use them for something important in your life. (Stallman 2015a)

    Stallman’s equation of freedom with control—self-control—is telling: Copyleft does not subvert copyright; it depends upon it. Hacking is dependent upon the corporate structure of industrial software development. It is embedded in the social matrix of closed-source software production, even though hackers tend to believe that “their expertise will keep them on the upside of the technology curve that protects the best and brightest from proletarianization” (Ross 2009, 168). A dual contradiction is at work here. First, copyleft inverts copyright in order to produce social conditions in which free software production may occur. Second, copyleft nevertheless remains dependent on closed-source software development for its own social reproduction. Without the state power that is necessary for contracts to be enforced, or without the reproduction of technical knowledge that is underwritten by capital’s continued interest in software development, FLOSS loses its social base. Artisanal hacking or digital homesteading could not enter into the void were capitalist computing to suddenly disappear. The decentralized production of software is largely epiphenomenal upon the centralized and highly cooperative models of development and deployment that typify commercial software development. The openness of development stands in uneasy contrast with the hierarchical organization of the management and direction of software firms (Russell 2014).
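
    This dependence is visible in the mechanics of copyleft licensing itself. The standard GNU GPL notice that developers attach to their source files opens with an assertion of copyright, because the conditional grant of freedoms that follows it is enforceable only against the background of copyright law. A representative header looks like the following (the file name and copyright holder are invented for illustration; the notice text is the Free Software Foundation’s standard recommended form):

        # frobnicate.py -- a hypothetical module released under copyleft terms
        # Copyright (C) 2018 Example Hacker
        #
        # This program is free software: you can redistribute it and/or modify
        # it under the terms of the GNU General Public License as published by
        # the Free Software Foundation, either version 3 of the License, or
        # (at your option) any later version.
        #
        # This program is distributed in the hope that it will be useful,
        # but WITHOUT ANY WARRANTY; without even the implied warranty of
        # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
        # GNU General Public License for more details.

    Strip out the copyright line and the conditional grant beneath it loses its legal force: the freedoms that copyleft secures are licensed exceptions to exclusive ownership, not alternatives to it.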

    Capital has accommodated free and open source software with little difficulty, as can be seen in the expansion of the open source software movement. As noted above, many advocates of both the free software and open source software movements frequently aver that their commitments overlap to the point that any differences are largely ones of emphasis. Nevertheless, open source software differs—in an ideal, if not political, sense—from free software in its distinct orientation to the value of freedom: freedom is valued as the absence of the fetters on coding, design, and debugging that characterize proprietary software development. As such, open source software trades on an interpretation of freedom that is rather distinct from the ethical individualism of free software. Indeed, it is more recognizably politically adjacent to right-wing libertarianism. This may be seen, for example, in the writings of Eric S. Raymond. His influential essay “The Cathedral and the Bazaar” is a paean not to the emancipatory potential of open source software but to its adaptability and suitability for large-scale, rapid-turnover software development—and to its amenability to the prerogatives of capital (Raymond 2000).

    One of the key ethical arguments made by free and open source software advocates rests on an understanding of property that is historically specific. The conception of property deployed within FLOSS is the absolute and total right of owners to dispose of their possessions—a form of property rights that is peculiar to the juridical apparatus of capitalism. There are, of course, superficial resemblances between software license agreements—which curtail the rights of those who buy hardware with pre-installed commercial software, for example—and the seigneurial prerogatives associated with feudalism. However, the property relations underpinning capitalist software development are the same property relations traded upon in FLOSS theory. FLOSS criticism of proprietary software rarely extends to a criticism of private property as such. Ethical arguments for the expansion of personal computing freedoms, made with respect to the prevailing set of property relations, frequently focus on consumption. The focus may be positive: the freedom of the individual finds expression in the autonomy of the rational consumer of commodities. Or the focus may be negative: individual users must eschew a consumerist approach to computing or they will be left at the mercy of corporate owners of proprietary software.

    Arguments erected on premises about individual consumption choices are not easily extended to the sphere of collective political action. They do not discharge calls for pressuring political institutions or pursuing public power. The Free Software Foundation, the main organizational node of the free software movement, addresses itself to individual users (and individual capitalist firms) and places its faith in the ersatz property relations made possible by copyleft’s parasitism on copyright. The FSF’s ostensible non-alignment is really complementary to, rather than antagonistic toward, the alignments of major open source organizations. Organizations associated with the open source software movement are eager to find institutional partners in the business world. It is certainly the case that in the world of commercial computing, the open source approach has been embraced as an effective means for socializing the costs of software production (and the reproduction of software development capacities) while privatizing the monetary rewards that can be realized on the basis of commodified software. Meanwhile, the writings of Stallman and the promotional literature of the Free Software Foundation eschew the kind of broad-based political strategy that their analysis would seem to call for—one in which FLOSS movements would join up with other social movements. An immobilist tendency toward a single-issue approach to politics is characteristic of FLOSS at large.

    One aspect of deliberative democracy—an aspect that is, as we have seen, treated as banal and unproblematic by many theorists of liberalism—that is often given greater emphasis by active proponents of digital democracy is the primacy of liberal property relations. Property relations take on special urgency in the discourse and praxis of free and open source software movements. Particularly in the propaganda and apologia of the open source movement, the personal computer is the ultimate form of personal property. More than that—it is an extension of the self. Computers are intimately enmeshed in human lives, to a degree even greater than was the case thirty years ago. To many hackers, the possibility that the code executed on their machines is beyond their inspection is a violation of their individual autonomy. Tellingly, analogies for this putative loss of freedom take as their postulates the “normal,” extant ways in which owners relate to the commodities they have purchased. (For example, running proprietary code on a computer may be analogized to driving a car whose hood cannot be opened.)

    Consider the Debian Social Contract, adopted in the wake of a series of controversies and debates about gender imbalance (O’Neil 2009, 129–146), which encodes a variety of liberal principles as the constitutive political materials of the Debian project. That the project’s constitutive document is self-reflexively liberal is signaled in its very title: it presupposes liberal concerns with the maximization of personal freedom and the minimization of coercion, all under the rubric of cooperation for a shared goal. The Social Contract was the product of internal struggles within the Debian project, which aims to produce a technically sophisticated and yet ethically grounded version of the GNU/Linux operating system. It represents the ascendancy of a tendency within the project that sought to affirm the project’s emancipatory aims. This is not to suggest that, prior to the adoption of the Social Contract, the project was characterized by an uncontested focus on technical expertise, at the direct expense of an emancipatory vision of FLOSS computing; nevertheless, the experience decisively shifted Debian’s trajectory such that it was no longer parallel with that of related projects.

    Another example of FLOSS’s fetishization of non-coercive, individual-centered ethics may be found in the emphasis placed on maximizing individual user freedom. The FSF, for example, considers it a violation of user autonomy to make the use of free and open source software conditional by restricting it—even if only notionally—to legal or morally sanctioned use cases. As is often the case when individualist libertarianism comes into contact with practical politics, an obstinate insistence on abstract principles discharges absurd commitments. The major stakeholders and organizational nodes in the free software movement—the FSF, the GNU development community, and so on—refuse even to censure the use of free software in situations characterized by the restriction or violation of personal freedoms: military computing, governmental surveillance, and so on.

    It must also be noted that the hacker ethos is at least partially coterminous with cyberlibertarianism. Found in both is the tendency to see the digital sphere both as the vindication of neoliberal economic precepts and as the ideal terrain in which to pursue right-wing social projects. From the user’s perspective, cyberlibertarianism is presented as a license to use and appropriate the work of others who have made their works available for such purposes. It may perhaps be said that cyberlibertarianism is the ethos of the alienated monad pursuing jouissance through the acquisition of technical mastery and control over a personal object, the computer.

    Persuasion and Contestation

    We are now in a position to examine the contradictions in the theory of politics that informs FLOSS activity. These contradictions converge at two distinct—though certainly related—sites. The first centers on power and interest aggregation; the second, on property and the claims of users over their machines and data. An elaboration and examination of these contradictions will suggest that, far from overcoming or transcending the contradictions of liberalism as they inhere either in contemporary political practice or in liberal political thought, FLOSS hackers and activists have reproduced them in their practices as well as in their texts.

    The first site of contradiction centers on politics. FLOSS advocates adhere to an understanding of politics that emphasizes moral suasion and that valorizes the autonomy of the individual to pursue chosen projects and satisfy their own preferences. This is despite the fact that the primary antagonists in the FLOSS political imaginary—corporate owners of IP portfolios, developers and retailers of proprietary software, and policy-makers and bureaucrats—possess considerable political, legal, and social power. FLOSS discourses counterpose to this power not counterpower but evasion, escape, and exit. Copyleft itself may be characterized as evasive, but more central here is the insistence that FLOSS is an ethical rather than a political project, in which individual developers and users must not be corralled into particular formations that might use their collective strength to demand concessions or transform digitally mediated social relations. This disavowal of politics directly inhibits the articulation of counter-positions and the pursuit of counterpower.

    So long as FLOSS as a political orientation remains grounded in a strategic posture of libertarian individualism and interpersonal moral suasion, it will be unable to effectively underwrite demands or place significant pressures on institutions and decision-making bodies. FLOSS political rhetoric trades heavily on tropes of individual sovereignty, egalitarian epistemologies, and participatory modes of decision-making. Such rhetorics align comfortably with the currently prevailing consensus regarding the aims and methods of democratic politics, but when relied on naïvely or uncritically, they place severe limits on the capacity of the FLOSS movement to expand its political horizons, or indeed to assert itself in such a way as to become a force to be reckoned with.

    The second site of contradiction is centered on property relations. In the self-reflexive and carefully articulated discourse of FLOSS advocates, persons are treated as ethical agents, but such agents are primarily concerned with questions of the disposition of their property—most importantly, their personal computing devices. Free software advocates, in particular, emphasize the importance of users’ freedoms, but their attentiveness to such freedoms appears to end at the interface between owner and machine. More generally, property relations are foregrounded in FLOSS discourse even as such discourse draws upon and deploys copyleft in order to weaponize intellectual property law against proprietarian use cases.

    So long as FLOSS as a social practice remains centered on copyleft, it will reproduce and reinforce the property relations which sustain a scarcity economy of intellectual creations. Copyleft is commonly understood as an ingenious solution to what is seen as an inherent tendency in the world of software towards restrictions on access, limitations on communication and exchange of information, and the diminution of the informational commons. However, these tendencies are more appropriately conceived of as notably enduring features of the political economy of capitalism itself. Copyleft cannot dismantle a juridical framework heavily weighted in favor of ownership in intellectual property from the inside—no more so than a worker-controlled-and-operated enterprise threatens the circuits of commodity production and exchange that comprise capitalism as a set of social relations. Moreover, major FLOSS advocates—including the FSF and the Open Source Initiative—proudly note the reliance of capitalist firms on open source software in their FAQs, press releases, and media materials. Such a posture—welcoming the embrace of FLOSS by the software industry, with its attendant practices of labor discipline and domination, customer and citizen surveillance, and privatization of data—stands in contradiction with putative FLOSS values like collaborative production, code transparency, and user freedom.

    The persistence—even, in some respects, the flourishing—of FLOSS in the current moment represents a considerable achievement. Capitalism’s tendency toward crisis continues to impel social relations toward the subsumption of more and more of the social under the rubric of commodity production and exchange. And yet it is still the case that access to computing processes, logics, and resources remains substantially unrestricted by legal or commercial barriers. Much of this must be credited to the efforts of FLOSS activists. The first cohort of FLOSS activists recognized that resisting the commodification of the information commons was a social struggle—not simply a technical challenge—and sought to combat it. That they did so according to the logic of single-issue interest group activism, rather than in solidarity with a broader struggle against commodification, should perhaps not be surprising; in the final quarter of the twentieth century, broad struggles for power and recognition by and on behalf of workers and the poor were at their lowest ebb in a century, and a reconfiguration of elite power in the state and capitalism was well under way. Cross-class, multiracial, and gender-inclusive social movements were losing traction in the face of retrenchment by a newly emboldened ruling class; and the conceptual space occupied by such work was contested. Articulating their interests and claims as participants in liberal interest group politics was by no means the poorest available strategic choice for FLOSS proponents.

    The contradictions of such an approach have nevertheless developed apace, such that the current limitations and impasses faced by FLOSS movements appear more or less intractable. Free and open source software is integral to the operations of some of the largest firms in economic history. Facebook (2018), Apple (2018), and Google (Alphabet, Inc. 2018), for example, all proudly declare their support of and involvement in open source development.[4] Millions of coders, hackers, and users can and do participate in widely (if unevenly) distributed networks of software development, debugging, and deployment. It is now a practical possibility for the home user to run and maintain a computer without proprietary software installed on it. Nevertheless, proprietary software development remains a staggeringly profitable undertaking, FLOSS hacking remains socially and technically dependent on closed computing, and the home computing market is utterly dominated by the production and sale of machines that ship with and run software that is opaque—by design and by law—to the user’s inspection and modification. These limitations are compounded by FLOSS movements’ contradictions with respect to property relations and political strategy.

    Implications and Further Questions

    The paradoxes and contradictions that attend both the practice and theory of digital democracy in the FLOSS movements bear strong family resemblances to the paradoxes and contradictions that inhere in much contemporary liberal political theory. Liberal democratic theory frequently seeks to meld a commitment to rational legitimation with an affirmation of the ideal of popular sovereignty; but an insistence on rational authority tends to undermine the insurgent potential of democratic mass action. Similarly, the public avowals of respect for human rights and the value of user freedom that characterize FLOSS rhetoric are in tension with a simultaneous insistence on moral suasion centered on individual subjectivity. What’s more, they are flatly contradicted by prominent FLOSS leaders’ and stakeholders’ stated commitments to capitalist labor relations and to neutrality with respect to the social or moral consequences of the use of FLOSS. Liberal political theory is potentially self-negating to the extent that it discards the political in favor of the ethical. Similarly, FLOSS movements short-circuit much of FLOSS’s potential social value through a studied refusal to consider the merits of collective action or the necessity of social critique.

    The disjunctures between the rhetorics and stated goals of FLOSS movements and their actual practices and existing social configurations deserve greater attention from a variety of perspectives. I have approached those disjunctures through the lens of political theory, but they merit attention in other disciplines as well. The contradiction between FLOSS’s discursive fealty to the emancipatory potential of software and the movement’s dependence upon the property relations of capitalism merits further elaboration and exploration. The digital turn is too easily conflated with the democratization of a social world that is increasingly intermediated by networked computing. The prospects for such an opening up of digital public life remain dim.

    _____

    Rob Hunter is an independent scholar who holds a PhD in Politics from Princeton University.

    _____

    Acknowledgments

    [*] I am grateful to the b2o: An Online Journal editorial collective and to two anonymous reviewers for their feedback, suggestions, and criticism. Any and all errors in this article are mine alone. Correspondence should be directed to: jrh@rhunter.org.

    _____

    Notes

    [1] The notion of the digitally-empowered “sovereign individual” is adumbrated at length in an eponymous book by Davidson and Rees-Mogg (1999) that sets forth a right-wing techno-utopian vision of network-mediated politics—a reactionary pendant to liberal optimism about the digital turn. I am grateful to David Golumbia for this reference.

    [2] For simultaneous presentations and critiques of these arguments see, for example, Dahlberg and Siapera (2007), Margolis and Moreno-Riaño (2013), Morozov (2013), Taylor (2014), and Tufekci (2017).

    [3] See Bernes (2013) for a thorough presentation of the role of logistics in (re)producing social relations in the present moment.

    [4] “Google believes that open source is good for everyone. By being open and freely available, it enables and encourages collaboration and the development of technology, solving real world problems” (Alphabet, Inc. 2018).

    _____

    Works Cited

    • Alphabet, Inc. 2018. “Google Open Source.” (Accessed July 31, 2018.)
    • Amrute, Sareeta. 2016. Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham, NC: Duke University Press.
    • Apple Inc. 2018. “Open Source.” (Accessed July 31, 2018.)
    • Barbrook, Richard, and Andy Cameron. (1995) 2002. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates, and Pirate Utopias. Cambridge, MA: The MIT Press. 363–387.
    • Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
    • Bernes, Jasper. 2013. “Logistics, Counterlogistics and the Communist Prospect.” Endnotes 3. 170–201.
    • Bonacich, Edna, and Jake Wilson. 2008. Getting the Goods: Ports, Labor, and the Logistics Revolution. Ithaca, NY: Cornell University Press.
    • Borsook, Paulina. 2000. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. New York: PublicAffairs.
    • Brown, Wendy. 2005. “Neoliberalism and the End of Liberal Democracy.” In Edgework: Critical Essays on Knowledge and Politics. Princeton, NJ: Princeton University Press. 37–59.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Castells, Manuel. 2010. The Rise of The Network Society. Malden, MA: Wiley-Blackwell.
    • Coleman, E. Gabriella. 2012. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ: Princeton University Press.
    • Dahlberg, Lincoln. 2010. “Cyber-Libertarianism 2.0: A Discourse Theory/Critical Political Economy Examination.” Cultural Politics 6:3. 331–356.
    • Dahlberg, Lincoln, and Eugenia Siapera. 2007. “Tracing Radical Democracy and the Internet.” In Lincoln Dahlberg and Eugenia Siapera, eds., Radical Democracy and the Internet: Interrogating Theory and Practice. Basingstoke: Palgrave. 1–16.
    • Davidson, James Dale, and William Rees-Mogg. 1999. The Sovereign Individual: Mastering the Transition to the Information Age. New York: Touchstone.
    • Drahos, Peter. 2002. Information Feudalism: Who Owns the Knowledge Economy? New York: The New Press.
    • Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
    • Facebook, Inc. 2018. “Facebook Open Source.” (Accessed July 31, 2018.)
    • Free Software Foundation. 2018. “What Is Free Software?” (Accessed July 31, 2018.)
    • Fuchs, Christian. 2014. Digital Labour and Karl Marx. London: Routledge.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge, MA: Harvard University Press.
    • Golumbia, David. 2013. “Cyberlibertarianism: The Extremist Foundations of ‘Digital Freedom.’” Uncomputing.
    • Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis, MN: University of Minnesota Press.
    • Gutmann, Amy. 1985. “Communitarian Critics of Liberalism.” Philosophy and Public Affairs 14. 308–322.
    • Habermas, Jürgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, MA: MIT Press.
    • Habermas, Jürgen. 1998. The Inclusion of the Other. Edited by Ciaran P. Cronin and Pablo De Greiff. Cambridge, MA: MIT Press.
    • Hindman, Matthew. 2008. The Myth of Digital Democracy. Princeton, NJ: Princeton University Press.
    • Kelty, Christopher M. 2008. Two Bits: The Cultural Significance of Free Software. Durham, NC: Duke University Press.
    • Klein, Hans. 1999. “Tocqueville in Cyberspace: Using the Internet for Citizens Associations.” Technology and Society 15. 213–220.
    • Laclau, Ernesto, and Chantal Mouffe. 2014. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. London: Verso.
    • Margolis, Michael, and Gerson Moreno-Riaño. 2013. The Prospect of Internet Democracy. Farnham: Ashgate.
    • Meehan, Johanna, ed. 1995. Feminists Read Habermas. New York: Routledge.
    • Mills, Charles W. 1997. The Racial Contract. Ithaca, NY: Cornell University Press.
    • Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs.
    • Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
    • Mouffe, Chantal. 2005. The Democratic Paradox. London: Verso.
    • Mouffe, Chantal. 2013. Agonistics: Thinking the World Politically. London: Verso.
    • Okin, Susan Moller. 1989. “Justice as Fairness, For Whom?” In Justice, Gender and the Family. New York: Basic Books. 89–109.
    • O’Neil, Mathieu. 2009. Cyberchiefs: Autonomy and Authority in Online Tribes. London: Pluto Press.
    • Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
    • Pedersen, J. Martin. 2010. “Introduction: Property, Commoning and the Politics of Free Software.” The Commoner 14 (Winter). 8–48.
    • Pettit, Philip. 2004. “Depoliticizing Democracy.” Ratio Juris 17:1. 52–65.
    • Quong, Jonathan. 2013. “On the Idea of Public Reason.” In The Blackwell Companion to Rawls, edited by John Mandle and David A. Reidy. Oxford: Wiley-Blackwell. 265–280.
    • Rawls, John. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
    • Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
    • Rawls, John. 2001. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
    • Rawls, John. 2007. Lectures on the History of Political Philosophy. Cambridge, MA: The Belknap Press of Harvard University Press.
    • Raymond, Eric S. 2000. The Cathedral and the Bazaar. Self-published.
    • Reagle, Joseph. 1999. “Why the Internet Is Good: Community Governance That Works Well.” Berkman Center.
    • Ross, Andrew. 2009. Nice Work If You Can Get It: Life and Labor in Precarious Times. New York: New York University Press.
    • Russell, Andrew L. 2014. Open Standards and the Digital Age. New York: Cambridge University Press.
    • Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. London: Penguin.
    • Stallman, Richard M. 2002. Free Software, Free Society: Selected Essays of Richard M. Stallman. Edited by Joshua Gay. Boston: GNU Press.
    • Stallman, Richard M. 2015a. “Free Software Is Even More Important Now.” GNU.org.
    • Stallman, Richard M. 2015b. “The GNU Manifesto.” GNU.org.
    • Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in the Digital Age. New York: Metropolitan Books.
    • Taylor, Charles. 1989. Sources of the Self. Cambridge, MA: Harvard University Press.
    • Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT: Yale University Press.
    • Weber, Steven. 2005. The Success of Open Source. Cambridge, MA: Harvard University Press.
    • Wolin, Sheldon. 1996. “The Liberal/Democratic Divide: On Rawls’s Political Liberalism.” Political Theory 24. 97–119.

     

  • Siobhan Senier — What Indigenous Literature Can Bring to Electronic Archives

    Siobhan Senier — What Indigenous Literature Can Bring to Electronic Archives

    Siobhan Senier

    Indigenous people are here—here in digital space just as ineluctably as they are in all the other “unexpected places” where historian Philip Deloria (2004) suggests we go looking for them. Indigenous people are on Facebook, Twitter, and YouTube; they are gaming and writing code, podcasting and creating apps; they are building tribal websites that disseminate immediately useful information to community members while asserting their sovereignty. And they are increasingly present in electronic archives. We are seeing the rise of Indigenous digital collections and exhibits at most of the major heritage institutions (e.g., the Smithsonian) as well as at a range of museums, universities and government offices. Such collections carry the promise of giving tribal communities more ready access to materials that, in some cases, have been lost to them for decades or even centuries. They can enable some practical, tribal-nation rebuilding efforts, such as language revitalization projects. From English to Algonquian, an exhibit curated by the American Antiquarian Society, is just one example of a digitally-mediated collaboration between tribal activists and an archiving institution that holds valuable historic Native-language materials.

    “Digital repatriation” is a term now used to describe many Indigenous electronic archives. These projects create electronic surrogates of heritage materials, often housed in non-Native museums and archives, making them more available to their tribal “source communities” as well as to the larger public. But digital repatriation has its limits. It is not, as some have pointed out, a substitute for the return of the original items. Moreover, it does not necessarily challenge the original archival politics. Most current Indigenous digital collections, indeed, are based on materials held in universities, museums and antiquarian societies—the types of institutions that historically had their own agendas of salvage anthropology, and that may or may not have come by their materials ethically in the first place. There are some practical reasons that settler institutions might be first to digitize: they tend to have rather large quantities of material, along with the staff, equipment and server space to undertake significant electronic projects. The best of these projects are critically self-conscious about their responsibilities to tribal communities. And yet the overall effect of digitizing settler collections first is to perpetuate colonial archival biases—biases, for instance, toward baskets and buckskins rather than political petitions; biases toward sepia photographs of elders rather than elders’ letters to state and federal agencies; biases toward more “exotic” images rather than newsletters showing Native activists successfully challenging settler institutions to acknowledge Indigenous peoples’ continuous and political presence.

    Those petitions, letters and newsletters do exist, but they tend to reside in the legions of small archives gathered, protected and curated by tribal people themselves, often with gallingly little material support or recognition from outside their communities. While it is true that many Indigenous cultural heritage items have been taken from their source communities for display in remote collecting institutions, it is also true that Indigenous people have continued to maintain their own archives of books, papers and art objects in tribal offices, tribal museums, attics and garages. Such items might be in precarious conditions of preservation, subject to mold, mildew or other damage. They may be incompletely inventoried, or catalogued only in an elder’s memory. And they are hardly ever digitized. A recent survey by the Association of Tribal Archives, Libraries, and Museums (2013) found that, even though digitization is now the industry standard for libraries and archives, very few tribal collections in the United States are digitizing anything at all. Moreover, the survey found, this often isn’t for lack of desire, but for lack of resources—lack of staff and time, lack of access to adequate equipment and training, lack of broadband.[1]

    Tribally stewarded collections often hold radically different kinds of materials that tell radically different stories from those historically promoted by institutions that thought they were “preserving” cultural remnants. Of particular interest to me as a literary scholar is the Indigenous writing that turns up in tribal and personal archives: tribal newsletters and periodicals; powwow and pageant programs; mimeographed books used to teach language and traditional narratives; recorded oral histories; letters, memoirs and more. Unlike the ethnographers’ photographs, colonial administrators’ records and (sometimes) decontextualized material objects that dominate larger museums, these writings tell stories of Indigenous survival and persistence. In what follows, I give a brief review of some of the best-known Indigenous electronic archives, followed by a consideration of how digitizing Indigenous writing, specifically, could change the way we see such archives. In their own recirculations of their writings online, Native people have shown relatively little interest in the concerns that currently dominate the field of Digital Humanities, including “preservation,” “open access,” “scalability,” and (perhaps the most unfortunate term in this context) “discoverability.” They seem much keener to continue what those literary traditions have in fact always done: assert and enact their communities’ continuous presence and political viability.

    Digital Repatriation and Other Consultative Practices

    Indigenous digital archives are very often based in universities, headed by professional scholars, often with substantial community engagement. The Yale Indian Papers Project, which seeks to improve access to primary documents demonstrating the continuous presence of Indigenous people in New England, elicits editorial assistance from a number of Indigenous scholars and tribal historians. The award-winning Plateau People’s Web Portal at Washington State University takes this collaborative methodology one step further, inviting consultants from neighboring tribal nations to come in to the university archives and select and curate materials for the web. Other digital Indigenous exhibits come from prestigious museums and collecting institutions, like the American Philosophical Society’s “Native American Images Project.” Indeed, with so many libraries, museums and archives now creating digital collections (whether in the form of e-books, scanned documents, or full electronic exhibits), materials related to Indigenous people can be found in an ever-growing variety of formats and places. Hence the rising popularity of portals—regional or state-based sites that act as gateways to a wide variety of digital collections. Some are specific to Indigenous topics and locations, like the Carlisle Indian School Digital Resource Center, which compiles web-based resources for studying U.S. boarding school history. Other digital portals sweep up Indigenous objects along with other cultural materials, like the Maine Memory Network or the Digital Public Library of America.

    It is not surprising that the bent of most of these collections is decidedly ethnographic, given that Indigenous people the world over have been the subjects of one prolonged imperial looting. Cultural heritage professionals are now legally (or at least ethically) required to repatriate human remains and sacred objects, but in recent years, many have also begun to speak of “digital repatriation.” Like digital collections of all kinds, these projects provide new access to materials held in far-flung locations—arguably a boon for elders and other Native people who live far from, say, the Smithsonian and can now readily view their cultural property. The digitization of heritage materials can, in fact, help promote cultural revitalization and culturally responsive teaching (Roy and Christal 2002; Srinivasan et al. 2010). Many such projects aim expressly “to reinstate the role of the cultural object as a generator, rather than an artifact, of cultural information and interpretation” (Brown and Nicholas 2012, 313).

    Nonetheless, Indigenous people may be forgiven if they take a dim view of their cultural heritage items being posted willy-nilly on the internet. Some have questioned whether digital repatriation is a subterfuge for forestalling or refusing the return of the original items. Jim Enote (Zuni), Executive Director of the A:shiwi A:wan Museum and Heritage Center, has gone so far as to say that the words “digital” and “repatriation” simply don’t belong in the same sentence, pointing out that nothing in fact is being repatriated, since even the digital item is, in most cases, also created by a non-Native institution (Boast and Enote 2013, 110). Others worry about the common assumption that unfettered access to information is always and everywhere an unqualified good. Anthropologist Kimberly Christen has asked pointedly, “Does Information Really Want to be Free?” Her answer: “For many Indigenous communities in settler societies, the public domain and an information commons are just another colonial mash-up where their cultural materials and knowledge are ‘open’ for the profit and benefit of others, but remain separated from the sociocultural systems in which they were and continue to be used, circulated, and made meaningful” (Christen 2012, 2879–80).

    A truly decolonized archive, then, calls for a critical re-examination of the archive itself. As Ellen Cushman (Cherokee) puts it, “Archives of Indigenous artifacts came into existence in part to elevate the Western tradition through a process of othering ‘primitive’ and Native traditions . . . . Tradition. Collection. Artifacts. Preservation. These tenets of colonial thought structure archives whether in material or digital forms” (Cushman 2013, 119). The most critical digital collections, therefore, are built not only through consultation with Indigenous knowledge-keepers, but also with considerable self-consciousness about the archival endeavor itself. The Yale editors, for instance, explain that “we cannot speak for all the disciplines that have a stake in our work, nor do we represent the perspective of Native people themselves . . . . [Therefore tribal] consultants’ annotations might include Native origin stories, oral sources, and traditional beliefs while also including Euro-American original sources of the same historical event or phenomena, thus offering two kinds of narratives of the past” (Grant-Costa, Glaza, and Sletcher 2012). Other sites may build this archival awareness into the interface itself. Performing Archive: Curtis + the “vanishing race,” for instance, seeks explicitly to “reject enlightenment ideals of the cumulative archive—i.e. that more materials lead to better, more accurate knowledge—in order to emphasize the digital archive as a site of critique and interpretation, wherein access is understood not in terms of access to truth, but to the possibility of past, present, and future performance” (Kim and Wernimont 2014).

    Additional innovations worth mentioning here include the content management system Mukurtu, initially developed by Christen and her colleagues to facilitate culturally responsive archiving for an Aboriginal Australian collection, and quickly embraced by projects worldwide. Recognizing that “Indigenous communities across the globe share similar sets of archival, cultural heritage, and content management needs” (Christen 2005, 317), Mukurtu lets them build their own digital collections and exhibits, while giving them fine-grained control over who can access those materials—e.g., through tribal membership, clan system, family network, or some other benchmark. Christen and her colleague Jane Anderson have also created a system of traditional knowledge (TK) licenses and labels—icons that can be placed on a website to help educate site visitors about the culturally appropriate use of heritage materials. The licenses (e.g., “TK Commercial,” “TK Non-Commercial”) are meant to be legal instruments for owners of heritage material; a tribal museum, for instance, could use them to signal how it intends for electronic material to be used or not used. The TK labels, meanwhile, are extra-legal tools meant to educate users about culturally appropriate approaches to material that may, legalistically, be in the “public domain,” but that from a cultural standpoint carries certain restrictions: e.g., “TK Secret/Sacred,” “TK Women Restricted,” “TK Community Use Only.”

    All of the projects described here, many still in their incipient stages, aim to decolonize archives at their core. They put Indigenous knowledge-keepers in partnership with computing and heritage management professionals to help communities determine how, whether, and why their collections shall be digitized and made available. As such, they have a great deal to teach digital literary projects—literary criticism (if I may) not being a profession historically inclined to consult with living subjects very much at all. Next, I ponder why, despite great strides in both Indigenous digital collections and literary digital collections, the twain have yet to meet.

    Electronic Textualities: The Pasts and Futures of Indigenous Literature

    While signatures, deeds and other Native-authored texts surface occasionally in the aforementioned heritage projects, digital projects devoted expressly to Indigenous writing are relatively few and far between.[2] Granting that Aboriginal people, like any other people, do produce writings meant to be private, as a literary scholar I am confronted daily with a rather different problem than that of cultural “protection”: a great abundance of poetry, fiction and nonfiction written by Indigenous people, much of which just never sees the larger audiences for which it was intended. How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?

    Literary history is another of those unexpected places in which Indians are always found. But while Indigenous literature—both historic and contemporary—has garnered increasing attention in the academy and beyond, the Digital Humanities does not seem to have contributed very much to the expansion and promotion of these canons. Conversely, while DH has produced some dynamic and diverse literary scholarship, scholars in Native American Studies seem to be turning toward this scholarship only slowly. Perhaps digital literary studies has not felt terribly inviting to Indigenous texts; many observers (Earhart 2012; Koh 2015) have remarked that the emerging digital literary canon, indeed, looks an awful lot like the old one, with the lion’s share of the funding and prestige going to predictable figures like William Shakespeare, William Blake, and Walt Whitman. At this moment, I know of no mass movement to digitize Indigenous writing, although a number of “public domain” texts appear in places like the Internet Archive, Google Books, and Project Gutenberg.[3] Indigenous digital literature seems light years away from the kinds of scholarly and technical standards achieved by the Whitman and Rossetti Archives. And without a sizeable or searchable corpus, scholarship on Indigenous literature likewise seems light years from the kinds of text mining, topic modeling and network analysis that are au courant in DH.

    Instead, we see small-scale, emergent digital collections that nevertheless offer strong correctives to master narratives of Indigenous disappearance, and that supply further material for ongoing sovereignty struggles. The Hawaiian-language newspaper project is one powerful example. Started as a massive crowdsourcing effort that digitized at least half of the remarkable run of 100 native-language newspapers produced by Hawaiian people between the 1830s and the 1940s, it calls itself “the largest native-language cache in the Western world,” and promises to change the way Hawaiian history is seen. It might well do so if, as Noenoe Silva (2004, 2) has argued, “[t]he myth of [Indigenous Hawaiian] nonresistance was created in part because mainstream historians have studiously avoided the wealth of material written in Hawaiian.” A grassroots digitization movement like the Hawaiian Nupepa Project makes such studious avoidance much more difficult, and it brings to the larger world of Indigenous digital collections direct examples—through Indigenous literacy—of Indigenous political persistence.

    It thus points to the value of the literary in Indigenous digitization efforts. Jessica Pressman and Lisa Swanstrom (2013) have asked, “What kind of scholarly endeavors are possible when we think of the digital humanities as not just supplying the archives and data-sets for literary interpretation but also as promoting literary practices with an emphasis on aesthetics, on intertextuality, and writerly processes? What kind of scholarly practices and products might emerge from a decisively literary perspective and practice in the digital humanities?” Abenaki historian Lisa Brooks (2012, 309) has asked similar questions from an Indigenous perspective, positing that digital space allows us to challenge conventional notions of literary periodization and of place, to “follow paths of intellectual kinship, moving through rhizomic networks of influence and inquiry.” Brooks and other literary historians have long argued that Indigenous people have deployed alphabetic literacy strategically to (re)build their communities, restore and revitalize their traditions, and exercise their political and cultural sovereignty. Digital literary projects, like the Hawaiian newspaper project, can offer powerful extensions of these practices in electronic space.

    Dawnlandvoices.org: Curating Indigenous Literary Continuance

    These were some of the questions and issues we had in mind when we started dawnlandvoices.org.[4] This archive is emergent—not a straight scan-and-upload of items residing in one physical site or group of sites, but rather a collaboration among tribal authors, tribal collections, and university-based scholars and students. It grew out of a print volume, Dawnland Voices: An Anthology of Indigenous Writing from New England (Senier 2014), which I edited with eleven tribal historians. Organized by tribal nation, the book ranges from the earliest writings (petroglyphs and political petitions) to the newest (hip-hop poetry and blog entries). The print volume already aimed to be a counter-archive, insofar as it represents the literary traditions of “New England,” a region that has built its very identity on colonial dispossession, colonial boundaries and the myth of Indian disappearance. It also already aimed to decolonize the archive, insofar as it distributes editorial authority and control to Indigenous writers, historians and knowledge-keepers. At almost 700 pages, though, Dawnland in book form could only scratch the surface of the wealth of writing that regional Native people have produced, and that remains, for the most part, in their own hands.

    We wanted a living document—one that could expand to include some of the vibrant pieces we could not fit in the book, one that could be revised and reshaped according to ongoing community conversation. And we wanted to keep presenting historic materials alongside new (in this case born-digital) texts, the better to highlight the long history of Indigenous writing in this region. But we also realized that this required resources. We approached the National Endowment for the Humanities and received a $38,000 Preservation and Access grant to explore how digital humanities resources might be better redistributed to empower tribal communities who want to digitize their texts, either for private tribal use or for more public dissemination. The partners on this grant included three different but representative kinds of collections: a tribal museum with some history of professional archiving and private support (the Tomaquag Indian Memorial Museum in Rhode Island); a tribal office that finds itself acting as an unofficial repository for a variety of papers and documents, and that does not have the resources to completely inventory or protect these (the Passamaquoddy Cultural Preservation Office in Maine); and four elders who have amassed a considerable collection of books, papers, and slides from their years working at the Boston Children’s Museum and Plimoth Plantation, and were storing these in their own homes (the Indigenous Resources Collaborative in Massachusetts). Under the terms of the grant, the University of New Hampshire sent digital librarians to each site to set up basic hardware and software for digitization, while training tribal historians in digitization basics. The end result of this two-year pilot project was a small exhibit of sample items from each archive.

    The obstacles to this kind of work for small tribal collections are perhaps not unique, but they are intense. Digitization is expensive, time-consuming, and labor-intensive, even more so for collections that do not have ample (or any) paid staff, that can’t afford to update basic software, or that don’t even have reliable internet connections. And there were additional hurdles: while the pressure from DH writ large (and from granting institutions individually) is frequently to demonstrate scalability, in the end, the tribal partners on this grant did not coalesce around a shared goal of digitizing their collections wholesale. The Passamaquoddy tribal heritage preservation officer has scanned and uploaded the greatest quantity of material by far, but he has strategically zeroed in on dozens of tribal newsletters containing radical histories of Native resistance and survival in the latter half of the twentieth century. The Tomaquag Museum does want to digitize its entire collection, but it prefers to do so in-house, for optimum control of intellectual property. The Indigenous Resources Collaborative, meanwhile, would rather digitize and curate just a small handful of items as richly as possible. While these elders were initially adamant that they wanted to learn to scan and upload their own documents, they learned quickly just how stultifying this labor is. What excited them much more was the process of selecting individual documents and dreaming about how best to share these online. An old powwow flyer describing the Mashpee Wampanoag game of fireball, for instance, had them naming elders and players they could interview, with the possibility of adding metadata in the form of video or narrative audio.

    More than a year after articulating this aspiration, the IRC has not begun to conduct or record any such interviews. Such a project is beyond their current energies, time and resources; and to be sure, any continuation of their work on this partner project at dawnlandvoices.org should be compensated, which will mean applying for new grants. But the delay or inaction also points to a larger conundrum: that for all of the Web’s avowed multimodality, Indigenous digital collections have generally not reflected the multimodality of Indigenous literatures themselves—in particular, their longstanding and mutually sustaining interplay of oral and written forms. Some (Golumbia 2015) would attribute this to an unwillingness within DH to recognize the kinds of digital language work being done by Indigenous communities worldwide. Perhaps, too, it owes something to the history of violence embedded in “recording” or “preserving” Indigenous oral traditions (Silko 1981); the Indigenous partners with whom I have worked are generally skeptical of the need to put their traditional narratives—or even some of the recorded oral histories they may have stored on cassette—online. There is also the time and labor involved in recording. It is now common to hear digital publishers wax enthusiastic about the “affordances” of the Web (it seems so easy to just add an mp3), but dawnlandvoices.org has elicited few recordings, despite our invitations to authors to contribute them.

    Unlike the texts in the most esteemed digital literature archives like the Rossetti Archive (edited, contextualized and encoded to the highest scholarly standard), the texts in dawnlandvoices.org are often rough, edgy, and unfinished; and that, quite possibly, is the way they will remain. Insofar as dawnlandvoices.org aspires to be a “database” at all (and we are not sure that it does), it makes sense at this point for there to be multiple pathways in and out of that collection, multiple ways of formatting and presenting material. It is probably fair to say that most scholars working on Indigenous digital archives dream of a day when these sites will have robust community engagement and commentary. At the same time, many would readily admit that it’s not as simple as building it and hoping they will come. David Golumbia (2015) has gone so far as to suggest that what marginalizes Indigenous projects within DH is the archive-centric nature of the field itself—that while “most of the major First Nations groups now maintain rich community/governmental websites with a great deal of information on history, geography, culture, and language. . . none of this work, or little of it, is perceived or labeled as DH.” Thus, the esteemed digital archives might not, in fact, be what tribal communities want most. Brown and Nicholas raise the equally provocative possibility that “[i]nstitutional databases may . . . already have been superseded by social networking sites as digital repositories for cultural information” (2012, 315). And, in fact, that most pervasive and understandably maligned of social-networking sites, Facebook, seems to be serving some tribal museums’, authors’ and historians’ immediate cultural heritage needs surprisingly well. Many post historic photos or their own writings to their walls, and generate fabulously rich commentary: identifications of individuals in pictures, memories of places and events, praise and criticism for poetry. Facebook is a proprietary and notoriously problematic platform, especially on the issue of intellectual property. And yet it has made room, at least for now, for a kind of fugitive curation that, fragile as it is, raises the question of whether such curation should be “institutional” at all. We can see similar things happening on Twitter (as in Daniel Heath Justice’s recent “year of tweets” naming Indigenous authors) and Instagram (where artists like Stephen Paul Judd store, share, and comment on their work). Outside of DH and settler institutions, Indigenous people are creating all kinds of collections that—if they are not “archives” in a way that satisfies professional archivists—seem to do what Native people, individually and collectively, need them to do. At least for today, these collections create what First Nations artists Jason Lewis and Skawennati Tricia Fragnito call “Aboriginally determined territories in cyberspace” (2005).

    What the conversations initiated by Kim Christen, Jane Anderson, Jim Enote and others can bring to digital literature collections is a scrupulously ethical concern for Indigenous intellectual property, an insistence on first voice and community engagement. What Indigenous literature, in turn, can bring to the table is an insistence on politics and sovereignty. Like many literary scholars, I often struggle with what (if anything) makes “Literature” distinctive. It’s not that baskets or katsina masks cannot be read as expressions of sovereignty—they can be, and they are. But Native literatures—particularly the kinds saved by Indigenous communities themselves rather than by large collecting institutions and salvage anthropologists—provide some of the most powerful and overt archives of resistance and resurgence. The invisibility of these kinds of tribal stories and tribal ways of knowing and keeping stories is an ongoing concern, even on the “open” Web. It may be that Digital Humanities writ large will continue to struggle against the seeming centrifugal force of traditional literary and cultural canons. It is not likely, however, that Indigenous communities will wait for us.

    _____

    Siobhan Senier is associate professor of English at the University of New Hampshire. She is the editor of Dawnland Voices: An Anthology of Indigenous Writing from New England and of dawnlandvoices.org.

    _____

    Notes

    [1] A study by Native Public Media (Morris and Meinrath 2009) found that broadband access in and around Native American and Alaska Native communities was less than 10 percent, sometimes as low as 5 to 6 percent.

    [2] Two rare and newer exceptions are the Occom Circle Project at Dartmouth College and the Kim-Wait/Eisenberg Collection at Amherst College.

    [3] The University of Virginia Electronic Texts Center at one time had an excellent collection of Native-authored or Native-related works, but these are now buried within the main digital catalog.

    _____

    Works Cited

    • Association of Tribal Archives, Libraries, and Museums. 2013. “International Conference Program.” Santa Ana Pueblo, NM.
    • Boast, Robin, and Jim Enote. 2013. “Virtual Repatriation: It Is Neither Virtual nor Repatriation.” In Peter Biehl and Christopher Prescott, eds., Heritage in the Context of Globalization. SpringerBriefs in Archaeology. New York: Springer. 103–13.
    • Brooks, Lisa. 2012. “The Primacy of the Present, the Primacy of Place: Navigating the Spiral of History in the Digital World.” PMLA 127:2. 308–16.
    • Brown, Deidre, and George Nicholas. 2012. “Protecting Indigenous Cultural Property in the Age of Digital Democracy: Institutional and Communal Responses to Canadian First Nations and Māori Heritage Concerns.” Journal of Material Culture 17:3. 307–24.
    • Christen, Kimberly. 2005. “Gone Digital: Aboriginal Remix and the Cultural Commons.” International Journal of Cultural Property 12:3. 315–45.
    • Christen, Kimberly. 2012. “Does Information Really Want to Be Free?: Indigenous Knowledge Systems and the Question of Openness.” International Journal of Communication 6. 2870–93.
    • Cushman, Ellen. 2013. “Wampum, Sequoyan, and Story: Decolonizing the Digital Archive.” College English 76:2. 116–35.
    • Deloria, Philip Joseph. 2004. Indians in Unexpected Places. Lawrence: University Press of Kansas.
    • Earhart, Amy. 2012. “Can Information Be Unfettered? Race and the New Digital Humanities Canon.” In Matt Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Golumbia, David. 2015. “Postcolonial Studies, Digital Humanities, and the Politics of Language.” Postcolonial Digital Humanities.
    • Grant-Costa, Paul, Tobias Glaza, and Michael Sletcher. 2012. “The Common Pot: Editing Native American Materials.” Scholarly Editing 33.
    • Kim, David J., and Jacqueline Wernimont. 2014. “‘Performing Archive’: Identity, Participation, and Responsibility in the Ethnic Archive.” Archive Journal 4.
    • Koh, Adeline. 2015. “A Letter to the Humanities: DH Will Not Save You.” Hybrid Pedagogy, April 19.
    • Lewis, Jason, and Skawennati Tricia Fragnito. 2005. “Aboriginal Territories in Cyberspace.” Cultural Survival (June).
    • Morris, Traci L., and Sascha D. Meinrath. 2009. New Media, Technology and Internet Use in Indian Country: Quantitative and Qualitative Analyses. Flagstaff, AZ: Native Public Media.
    • Pressman, Jessica, and Lisa Swanstrom. 2013. “The Literary And/As the Digital Humanities.” Digital Humanities Quarterly 7:1.
    • Roy, Loriene, and Mark Christal. 2002. “Digital Repatriation: Constructing a Culturally Responsive Virtual Museum Tour.” Journal of Library and Information Science 28:1. 14–18.
    • Senier, Siobhan, ed. 2014. Dawnland Voices: An Anthology of Indigenous Writing from New England. Lincoln: University of Nebraska Press.
    • Silko, Leslie Marmon. 1981. “An Old-Time Indian Attack Conducted in Two Parts.” In Geary Hobson, ed., The Remembered Earth: An Anthology of Contemporary Native American Literature. Albuquerque: University of New Mexico Press. 211–16.
    • Silva, Noenoe K. 2004. Aloha Betrayed: Native Hawaiian Resistance to American Colonialism. Durham: Duke University Press.
    • Srinivasan, Ramesh, et al. 2010. “Diverse Knowledges and Contact Zones within the Digital Museum.” Science, Technology, & Human Values 35:5. 735–68.


  • Jonathan Beller — The Computational Unconscious

    Jonathan Beller — The Computational Unconscious

    Jonathan Beller

    God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
    – Plato, Timaeus

    The term “computational capital” names an understanding of the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and of contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus, posited and then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double-entry bookkeeping, the rationalization of production and wages in the assembly line, and, more recently, cameras and modern computing, leaves no stone unturned. Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.

    The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the ironclad movements of code. While “ironclad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines, to the point where in some quarters “code is law,” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production—categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.—are also deposited, and thus operationally disappeared, into our machines.

    Put simply, and, in deference to contemporary attention spans, too soon: our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words, the inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?

    Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867, 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest not only that society as a whole has become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai—Big Artificial Intelligence (the world system as organized by computational capital) and “you,” Little Artificial Intelligence (as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.

    Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).

    Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.

    Introduction: The Computational Unconscious

    Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay, and the book of which it is chapter one, develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power toward ends that for me are still best described by the word communist (or perhaps meta-communist, but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.

    Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media-indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas, and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices, which is to say that the universal Turing machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine among the most important abstract machines of the last century, save perhaps that of capital itself. However, both their ranking and even their separability are here what we seek to put into question.

    Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.

    The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoiesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.

    Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption, of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes, and indeed enforces, a sovereignty tantamount to divinity, externalities be damned. Dissident, fugitive planning and black study – a studied refusal of optimization, a refusal of computational colonialism – may offer a way out of the current geo-(post-)political order and its computational orthodoxy.

    Computational Idolatry and Multiversality

    In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology, from a conference sponsored by the sovereign state of NYU:

    As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)

    Science fiction, in the form of The Matrix, is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world (or, rather, cosmic) spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality”; the trouble here is with “itself.” To the extent that we recognize that poesis (making) has been extended to our machines and that it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it, to the point that reality itself is a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses, multiversal. In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.

    Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”

    The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that in any sufficiently powerful logical system, statements can be formulated that can be neither proved nor disproved within that system, so that no such system is internally closed—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoiesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open: there is no closure, complete self-knowledge is impossible, and thus there is always an outside or a beyond; while in the latter group’s view, our philosophy, our politics and apparently our fate are wedded to a system that can know no outside, since it may only render an outside in its own terms, unless, or perhaps even if/as, that encounter is catastrophic.

    Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoiesis) all we can know is ourselves, and, on the other, Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any of us who are, willingly or not, participants in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx: vol. 28, 38-39).[3]

    This “subject, society,” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying that “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).

    The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present,” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious are historical formations. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.

    More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled the theory of the unconscious on information theory and cybernetics. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns, including the following:

    By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)

    Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks as to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What Is Cybernetics? She writes: “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”

    “You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”

    Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that cybernetic research in the West was driven by military and, later, industrial applications, that is, applications deemed essential for the development of capitalism and the capitalist way of life, and that such a statement therefore calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes. For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously, as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.

    Perhaps more elusive even than this historicity of the unconscious, grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,

    Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)

    This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing, by capital markets, by apparatuses, by digital computers, requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask in what way capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, have structured computational paradigms. In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking expressed in numbers was materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notions that 1) the camera is a black box that is a programme and 2) the photograph or technical image produces a “magical” relation to the world inasmuch as people understand the photograph as a window rather than as information organized by concepts. This amounts to the technical image as itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras. If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message Is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.

    Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: the function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego, normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day-to-day and indeed nano-second to nano-second organization of life on planet earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but, more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked up, highly ramified social differences? Structural and now infra-structural inequalities include social injustices—what could be thought of as, and in a certain sense are, algorithmic racism, sexism and homophobia, and also programmatically unequal access to the many things that sustain life, and legitimize murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions, resulting from the near automation of geo-political emergence along with a vast conscription of agents, is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.

    For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.

    But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.

    The short answer then regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation, that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly-established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically-interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed to profitably exploit an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth. Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.

    Dialectics of the Human-Machine

    For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 chromosome pairs of the human genome that replicate what was thought of as the human being as an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means precisely computation.

    If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance—as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine—is on the wane.

    Naming the human-machine is of course a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Colonization and civilization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from, and were deployed to do, the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia, in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.

    This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French revolution with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, and property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather it is the tendency that is unveiled by the present platform post-humanism of computational racial capital. The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?

    So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean, for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisectioned cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, and police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been driven by our machines. However, the POTUSian defeat of the so-called humans is double-edged to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood that the dominant discourse on the posthuman is not the end of the white liberal human subject but precisely, in the hands of those not committed to an anti-racist and decolonial project, a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body. Sovereignty itself must change (in order, as Giuseppe Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.

    Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things ever to happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most); on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals. However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may be as bad or worse, and in absolute numbers is already worse, for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?

    For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation, since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. The quantitative exchanges of equivalents, circulating as exchange values without qualities, are the real abstractions that give rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money (that is, to argue that money as a medium is value-neutral because it embodies what has, in many respects correctly but in a qualified way, been termed “the universal value form”) would be to miss the entire system of leveraged exploitation that sustains the money-system. In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in mere delusional utopianism and self-interested (might one even say psychotic?) naysaying.

    Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species’ lifespan as an instance of a cosmic program is not so distant from the endemic violence of postmodern—and, indeed, post-human—fascism he depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. What we beheld in these two films, for the individual and for the species, was the annihilation of our agency—and it was eerily seductive: Benjamin’s self-destruction as an aesthetic pleasure of the highest order, taken to cosmic proportions and raised to the level of Art (1969).

    So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism which, in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and is now no longer, really has been pre-history, stages of pre-history? What would it mean to say in earnest “What’s past is prologue”?[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?

    Figure 1. Hollerith punch card (image source: Library of Congress, http://memory.loc.gov/mss/mcc/023/0008.jpg)

    Techno-Humanist Dehumanization

    I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punch card system and mechanical tabulator that was used in the 1890 US census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which normally took eight to ten years to tabulate, now took a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without the tabulators having to be rebuilt—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing Tabulating Recording Company, which renamed itself IBM in 1924.
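
    To make the plugboard’s significance concrete, here is a minimal sketch (in Python, with invented census fields rather than Hollerith’s actual card layout) of what reconfigurable tabulation amounts to: the same deck of cards can be counted under any sequence of sorting criteria without rebuilding the machine.

    ```python
    # A toy tabulator. The card fields and values below are hypothetical,
    # standing in for the punched columns of an actual census card.
    from collections import Counter

    cards = [
        {"sex": "F", "race": "white", "occupation": "teacher"},
        {"sex": "M", "race": "white", "occupation": "farmer"},
        {"sex": "F", "race": "black", "occupation": "farmer"},
        {"sex": "M", "race": "black", "occupation": "laborer"},
    ]

    def tabulate(deck, fields):
        # Count cards grouped by an arbitrary, re-selectable sequence of
        # fields: the software analogue of rewiring the plugboard.
        return Counter(tuple(card[f] for f in fields) for card in deck)

    print(tabulate(cards, ["sex"]))                 # one "wiring"
    print(tabulate(cards, ["race", "occupation"]))  # another, no rebuild needed
    ```

    The point of the sketch is the essay’s point: the politics ride in the categories punched into the cards, not in the counting mechanism.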

    While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines and 2) as Vicente Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).

    Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish Colonialism to U.S. Imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:

    Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.

    But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.

    IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).

    The sorting of populations and individuals by forms of social difference including “race,” ability and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped) for the purposes of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between 1/10th and 1/6th of the population of the Philippines, and the Nazi-administered holocaust are but two world historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.

    The mechanisms, as well as the social meaning of computation, were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English used as its ur-text Jefferson The Virginian (1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy (1997) for its suppression of information regarding Jefferson’s relation to slavery.[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but language that belongs to a particular mode of sociality and racialized enfranchisement. Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and now as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard even as it also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
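
    The corpus-dependence of Shannon’s statistics can be made concrete with a minimal sketch (mine, not Shannon’s procedure, and with sample strings of my own choosing): the first-order entropy of “English” is always the entropy of some particular text’s letter frequencies.

    ```python
    # First-order letter statistics in the spirit of Shannon's frequency
    # tables; the sample strings are illustrative stand-ins, not his data.
    import math
    from collections import Counter

    def letter_entropy(text):
        # Shannon entropy, in bits per letter, of the single-letter
        # frequency distribution of the given text.
        letters = [c for c in text.lower() if c.isalpha()]
        counts = Counter(letters)
        n = len(letters)
        return -sum((k / n) * math.log2(k / n) for k in counts.values())

    # Two different ur-texts yield two different statistical models of "English."
    print(letter_entropy("We hold these truths to be self-evident"))
    print(letter_entropy("We, the people of the United States"))
    ```

    Whichever corpus supplies the frequencies silently becomes the statistical norm of the language.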

    In a 1987 interview, Shannon was quoted as saying “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” (1971). If humans are the robot’s companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being,” such as it is/was, is encrypted in their function. And “we” have never been “one.”

    Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues. “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another.” (McPherson 2012, 30-31; italics in original)

    This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines industry standards for the now-digital rendering of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.

    More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged with them. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.
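
    How a dataset’s demographic skew surfaces in a “prototype” can be shown with a toy numpy sketch; this is emphatically not Facebook’s method, only an illustration that the mean of a skewed sample sits near its over-represented class.

    ```python
    # Toy illustration of dataset bias: pixel values are invented stand-ins
    # for face images, 90 samples from one brightness cluster, 10 from another.
    import numpy as np

    rng = np.random.default_rng(0)
    light = rng.normal(loc=200, scale=10, size=(90, 8, 8))  # over-represented
    dark = rng.normal(loc=80, scale=10, size=(10, 8, 8))    # under-represented
    dataset = np.concatenate([light, dark])

    prototype = dataset.mean(axis=0)  # the "minimally recognizable" average
    print(prototype.mean())  # ~188 of 255: the average inherits the skew
    ```

    No intention is required anywhere in the pipeline for the output to be raced.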

    Thus we may observe that the statistical methods utilized by IBM to find Jews in the Shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view, even if it is not one of pure mathematical abstraction in which computational process has its essence without reference to any concrete whatever, can be found in what follows, as a Bloomberg News article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups” almost giddily reports:

    The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]

    Not only is war good computer business, it’s good for computerized business. It is ironic that the technology is likened to a tapestry and now used to sell textiles—almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early circuits for Fairchild, the storied Silicon Valley firm founded by defectors from the laboratory of Silicon Valley founding father and infamous eugenicist William Shockley.[9] The article goes on to confess that the latest consumer spin-offs, which facilitate the real-time imaging of couches in your living room and are capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)

    Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming at ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).

    Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.

    Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines, the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” i.e., a Roman term for “slave”) in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies, extended to the machine, a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?

    Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts the lower-class, mostly of-color workers leaving the Google Mountain View campus during off hours. These workers, the book scanners, shared neither the spaces nor the perks of Google’s white-collar workers; they had different parking lots and entrances and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that their posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression, and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process carried out by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude. Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.

    The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and with it the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that is endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory, of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting: the exponential growth in memory storage means also an exponential growth in systematic forgetting, the withering away of the analogue. As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.

    _____

    Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.

    Back to the essay

    _____

    Notes

    [1] A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content indifferent processing that the term “exchange value,” severed as it is of all qualities, indicates.

    [2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).

    [3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,

    Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)

    [4] If one takes the derivative of computational process at a particular point in space-time, one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process; the summation of images is the movement of the process.
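
    One possible schematic formalization of this note’s metaphor (the notation is mine and purely illustrative): writing C(x,t) for a figurative field of computational process over space-time,

    ```latex
    % Schematic only: C(x,t) is a figurative "computational process" field,
    % not a defined mathematical object.
    I(x_0, t_0) = \left.\frac{\partial C}{\partial t}\right|_{(x_0,\,t_0)},
    \qquad
    E = \iint I(x, t)\, dx\, dt
    ```

    with I the image taken at a point and E the calculated exploit obtained by integrating images over space and time.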

    [5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.

    [6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum puts things as follows:

    We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, heathcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.

    The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).

    [7] See Beller (2016, 2017).

    [8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”

    [9] Nakamura (2014).

    [10] For more on this, see Eglash (2007).

    _____

    Works Cited

    • Althusser, Louis. 1977. Lenin and Philosophy. London: NLB.
    • Azoulay, Ariella. 2012. Civil Imagination: A Political Ontology of Photography. London: Verso.
    • Beller, Jonathan. 2006. The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle. Hanover, NH: Dartmouth College Press.
    • Beller, Jonathan. 2016. “Fragment from The Message is Murder.” Social Text 34:3. 137-152.
    • Beller, Jonathan. 2017. The Message is Murder: Substrates of Computational Capital. London: Pluto Press.
    • Benjamin, Walter. 1969. Illuminations. Schocken Books.
    • Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
    • Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
    • Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
    • Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge,” Grey Room 18, Winter: 26-51.
    • Chun, Wendy Hui Kyong. 2012. In Nakamura and Chow-White (2012). 38-69.
    • De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
    • Dyer-Witheford, Nick. 2012. “Red Plenty Platforms.” Culture Machine 14.
    • Eglash, Ron. 2007. “Broken Metaphor: The Master-Slave Analogy in Technical Literature.” Technology and Culture 48:3. 1-9.
    • Federici, Sylvia. 2012. Revolution at Point Zero. PM Press.
    • Fidaner, Işık Barış. 2014. Email broadcast on ICTs and Society listserv (Aug 29).
    • Flusser, Vilém. 2000. Towards a Philosophy of Photography. London: Reaktion Books.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge: Harvard University Press.
    • Gordon-Reed, Annette. 1997. Thomas Jefferson and Sally Hemings: An American Controversy. Charlottesville: University of Virginia Press.
    • Harney, Stefano and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Hayles, Katherine N. 2016. “The Cognitive NonConscious.” Critical Inquiry 42:4. 783-808.
    • Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Penguin Books.
    • Jameson, Fredric. 1981. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca: Cornell University Press.
    • Kubrick, Stanley, dir. 1968. 2001: A Space Odyssey. Film.
    • Kubrick, Stanley, dir. 1971. A Clockwork Orange. Film.
    • Liu, Lydia He. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Luhmann, Niklas. 1989. Ecological Communication. Chicago: University of Chicago Press.
    • Malone, Dumas. 1948. Jefferson The Virginian. Boston: Little, Brown and Company.
    • Marx, Karl and Frederick Engels. 1978. The German Ideology. In The Marx-Engels Reader, 2nd edition. Edited by Robert C. Tucker. New York: Norton.
    • Marx, Karl and Frederick Engels. 1986. Collected Works, Vol. 28. New York: International Publishers.
    • Maturana, Humberto and Francisco Varela. 1992. The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala.
    • McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In Nakamura and Chow-White (2012). 21-37.
    • Mulvey, Laura. 1975. “Visual Pleasure and Narrative Cinema.” Screen 16:3. 6-18.
    • Nakamura, Lisa. 2014. “Indigenous Circuits.” Computer History Museum (Jan 2).
    • Nakamura, Lisa and Peter A. Chow-White, eds. 2012. Race After the Internet. New York: Routledge.
    • Rafael, Vicente. 2000. White Love: And Other Events in Filipino History. Durham: Duke University Press.
    • Ramjit, Alana. 2014. “The Lorem Ipsum Project.” Unpublished manuscript.
    • “Rebooting the Cosmos: Is the Universe the Ultimate Computer? [Replay].” 2011. In-Depth Report: The World Science Festival 2011: Encore Presentations and More. Scientific American.
    • Robinson, Cedric. 1983. Black Marxism: The Making of the Black Radical Tradition. Chapel Hill: The University of North Carolina Press.
    • Shannon, Claude. 1948. “A Mathematical Theory of Communication.” The Bell System Technical Journal. July: 379-423; October: 623-656.
    • Shannon, Claude and Warren Weaver. 1971. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Siegert, Bernhard. 2013. “Cultural Techniques: Or the End of the Intellectual Postwar Era in German Media Theory.” Theory, Culture and Society 30. 48-65.
    • Tadiar, Neferti. 2016. “City Everywhere.” Theory, Culture and Society 33:7-8. 57-83.
    • Virno, Paolo. 2004. A Grammar of the Multitude. New York: Semiotext(e).
    • Vismann, Cornelia. 2013. “Cultural Techniques and Sovereignty.” Theory, Culture and Society 30. 83-93.
    • Wiener, Norbert. 1989. The Human Use of Human Beings: Cybernetics and Society. London: Free Association Books.
    • Wilson, Andrew Norman, dir. 2011. “Workers Leaving the Googleplex.” Video.

     

  • Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)

    As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” so often seems to exclude women. This essay takes on notability, one of the “tests” used to determine whether content is worthy of inclusion in Wikipedia, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.

    According to Wikipedia’s notability guideline, a subject is considered notable if it “has received significant coverage in reliable sources that are independent of the subject” (“Wikipedia:Notability” 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable; for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in coverage of female figures in Wikipedia compared to male figures. One study of Encyclopedia Britannica and Wikipedia concluded,

    Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)

    Five years later, another study found this bias persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the 20th century the problem of exclusion was wildly exacerbated by “sourcing and notability issues” (“Gender Bias on Wikipedia” 2017).

    One potential source for buttressing the case of notable women has been identified by literary scholar Alison Booth, who catalogued more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular (Booth 2004). Booth also points out that, lest we consign the genre to the realm of mere curiosity, the volumes were “indispensable aids in the formation of nationhood” (Booth 2004, 3).

    To reveal the historical contingency of the purportedly neutral criteria of notability, I utilized longitudinal data compiled by Booth, which reveals that notability has never been the stable concept Wikipedia’s standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety from a historical perspective. One might be well known while remaining relatively unimportant from a historical perspective. Such distinctions are collapsed in Wikipedia, which assumes that a body of writing about a historical subject stands as prima facie evidence of notability.

    While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women that have higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining pages in Wikipedia to those who mirror “the ‘Great Man Theory’ of history” (Mattern 2015) or are “notorious” (Lerner 1975).

    Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and tracked their frequency over time (2004, 394–396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.

    Figure 1. US women by publication date of books that included them (image source: author)
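
    A note on method: a chart like Figure 1 can be produced with a short script once Booth’s web data has been transcribed. The sketch below is illustrative only; the file name booth_mentions.csv and its layout (one row per woman per volume, with name and pub_year columns) are assumptions about how that data might be transcribed, not a description of Booth’s actual files.

        import pandas as pd
        import matplotlib.pyplot as plt

        # Hypothetical transcription of Booth's data: one row per mention,
        # recording the woman's name and the volume's publication year.
        mentions = pd.read_csv("booth_mentions.csv")
        mentions = mentions[mentions["pub_year"].between(1850, 1930)]
        mentions["decade"] = (mentions["pub_year"] // 10) * 10

        # Count mentions per woman per decade, then draw one line per woman.
        counts = mentions.groupby(["name", "decade"]).size().unstack(fill_value=0)
        ax = counts.T.plot()
        ax.set_xlabel("decade of publication")
        ax.set_ylabel("volumes including her")
        plt.show()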

    This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because, over time, motherhood alone ceases to be seen as a significant contribution to history. Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar drop-off may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century: the novelty of female foreign missionaries like Judson faded as more women entered the field. Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, whereby only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.

    Although their notability changed over time, all the women depicted in Figure 1 have Wikipedia pages; this is unsurprising, as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples? Does inclusion in a relatively recent work that declares women notable mean that these women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology of calculating missing biographies in Wikipedia: a reference corpus identifies women who might reasonably be expected to appear in Wikipedia, and I calculate the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing biographies quotient for individuals in selected sections of the “classified list of biographies” that appears at the end of the third volume of Notable American Women. The categories with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1).

    Classification | % missing
    Astronomers | 0
    Biologists | 0
    Chemists & Physicists | 0
    Heroines | 0
    Illustrators | 0
    Indian Captives | 0
    Naturalists | 0
    Psychologists | 0
    Sculptors | 0
    Wives of Presidents | 0

    Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia
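
    The page-existence check underlying the missing biographies quotient is straightforward to automate. The following minimal sketch, offered for readers who wish to replicate the calculation, queries the public MediaWiki API for each name and computes the percentage lacking a page. The two-name sample list is illustrative only, and the exact-title lookup is a simplification of the actual study, in which name variants (maiden names, initials, married forms) had to be checked by hand.

        import requests

        API = "https://en.wikipedia.org/w/api.php"

        def has_page(name):
            """Return True if an article with this title exists on English Wikipedia."""
            params = {
                "action": "query",
                "titles": name,
                "redirects": 1,  # follow redirects so alternate name forms still count
                "format": "json",
            }
            pages = requests.get(API, params=params).json()["query"]["pages"]
            # The API flags nonexistent titles with a "missing" key.
            return all("missing" not in page for page in pages.values())

        # Illustrative sample only; the study used the classified list of
        # biographies from Notable American Women.
        names = ["Mary Ellen Richmond", "Emily Wayland Dinwiddie"]
        missing = [n for n in names if not has_page(n)]
        print(f"missing biographies quotient: {100 * len(missing) / len(names):.0f}%")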

    Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as with the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Extraordinary women, such as heroines, and those who are quite rare, such as Indian captives, also have a greater chance of success in Wikipedia.[1]

    Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975).  “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).

    Name | Included?
    Dinwiddie, Emily Wayland | no
    Glenn, Mary Willcox Brown | no
    Kingsbury, Susan Myra | no
    Lothrop, Alice Louise Higgins | no
    Pratt, Anna Beach | no
    Regan, Agnes Gertrude | no
    Breckinridge, Sophonisba Preston | page
    Richmond, Mary Ellen | page
    Smith, Zilpha Drew | stub

    Table 2. Social Workers from Notable American Women by inclusion in Wikipedia

    Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” it is not linked to the category of American social workers (“Category:American Social Workers” 2015). If a female historical figure is not as exceptional as Breckinridge, she needs to be a “first,” like Mary Ellen Richmond, who makes it into Wikipedia as the “social work pioneer” (“Mary Richmond” 2017).

    This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder?
    Delano, Jane Arminda | leading pioneer | World War I | founder of the American Red Cross Nursing Service
    Fedde, Sister Elizabeth* | | | established the Norwegian Relief Society
    Maxwell, Anna Caroline | pioneering activities | Spanish-American War |
    Nutting, Mary Adelaide | world’s first professor of nursing | World War I | founded the American Society of Superintendents of Training Schools for Nurses
    Richards, Linda | first professionally trained American nurse, pioneering modern nursing in the United States | No | pioneered the founding and superintending of nursing training schools across the nation
    Robb, Isabel Adams Hampton | early leader (held many “first” positions) | No | helped to found … the National League for Nursing, the International Council of Nurses, and the American Nurses Association
    Stimson, Julia Catherine | first woman to attain the rank of Major | World War I |
    Wald, Lillian D. | coined the term “public health nurse” & the founder of American community nursing | No | founded Henry Street Settlement
    Mahoney, Mary Eliza | first African American to study and work as a professionally trained nurse in the US | No | co-founded the National Association of Colored Graduate Nurses
    Thoms, Adah B. Samuels | | World War I | co-founded the National Association of Colored Graduate Nurses

    * Fedde appears in Wikipedia primarily as a Norwegian Lutheran Deaconess. The word “nurse” does not appear on her page.

    Table 3. Nurses from Notable American Women by inclusion in Wikipedia

    As the entries for nurses reveal, in addition to being first, several other factors work in combination in a female subject’s favor in achieving success in Wikipedia. Nurses who founded an institution or organization, or who participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.

    If distinguishing oneself by being a “first” or founding something as part of a male-dominated event facilitates higher levels of inclusion in Wikipedia for women in female-dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors offer only a partial explanation (Table 4).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder? | Description from Wikipedia
    Bagley, Sarah G. | “probably the first” | No | formed the Lowell Female Labor Reform Association | headed up the female department of a newspaper until fired because “a female department. … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
    Barry, Leonora Marie Kearney | “only woman,” “first woman” | Knights of Labor | | “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
    Bellanca, Dorothy Jacobs | “first full-time female organizer” | No | organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended the founding convention of the Amalgamated Clothing Workers of America | “men resented” her
    Haley, Margaret Angela | “pioneer leader” | No | No | dubbed the “lady labor slugger”
    Jones, Mary Harris | No | Knights of Labor, IWW | | “most dangerous woman in America”
    Nestor, Agnes | No | Women’s Trade Union League | founded the International Glove Workers Union |
    O’Reilly, Leonora | No | Women’s Trade Union League | founded the Wage Earners Suffrage League | “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
    O’Sullivan, Mary Kenney | the first woman employed by the AFL | Women’s Trade Union League | founder of the Women’s Trade Union League |
    Stevens, Alzina Parsons | first probation officer | Knights of Labor | |

    Table 4. Labor leaders from Notable American Women by inclusion in Wikipedia

    In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders as predictors of success in Wikipedia. One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable. Less optimistically, what Lerner labeled “notorious” behavior also appears to predict success: six of the nine women were included for reasons ranging from speaking out publicly to advocating resistance.

    The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion.  On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.

    The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).

    Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average Wikipedia editor is white, from Western Europe or the United States, between 30 and 40 years old, and, overwhelmingly, male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women: “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).

    The hostility within Wikipedia toward female contributors remains a significant barrier to altering its content, since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015). Despite years of concerted efforts to make Wikipedia more hospitable to women, to organize editathons, and to place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). The authors of a recent study of “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure,” an infrastructure that includes “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).

    Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “[a] lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.

    This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)

    While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.

    _____

    Michelle Moravec is an associate professor of history at Rosemont College.


    _____

    Notes

    [1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.

    _____

    Works Cited