boundary 2

Tag: digital politics

  • Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)


    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb

    ~

    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point: though it was a very unpopular position to take for many years, it is now fairly safe to say that “Facebook is a problem”; it remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there still remains a common sentiment that computers are neutral. And thus such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why the book is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors are quite upfront about the volume’s confrontational stance. Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors will do this in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14). Unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were either ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto; as the volume proceeds, the contributors maintain that sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual,” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency,” which sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread?,” which turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything are relying on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close with a consideration of how the digital age is “dominated by the feeling of paranoia” (120), discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137): these lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters that make up the second part of the volume. Hicks’ comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in the development of the tech sector, Corinna Schlombs looks to the history of IBM to consider how that company suppressed efforts by workers to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those who speak with accents, a reflection of the lack of diversity amongst those who design these technologies. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim they are creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treats the demands of languages that are not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi), to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how platforms have developed into infrastructure far more swiftly than older infrastructural systems did, a point he explores by highlighting three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too do the other two editors close the book with a similar rallying cry. In assessing the chapters that had come before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors, and editors, have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we seek the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters forces us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines: we need to be considering what kind of technological world we want to live in. And moreover, we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness on the parts of the contributors to write in a declarative style. In this book criticisms are not carefully couched behind three layers of praise for Silicon Valley, and odes of affection for smartphones, rather the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone results in too many wonderful turns of phrase and incendiary remarks to be able to list all of them here, but the broad discussion around computers would be greatly enhanced with more comments like Janet Abbate’s “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263) While critics of technology often find themselves having to argue from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position, beyond the titular idea, it is the one eloquently captured by Mar Hicks thusly:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer-dependent societies, and how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is unfortunately a testament to decades of unheeded warnings.

    One of the objectives that is taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this isn’t the first moment when a growing distrust and dissatisfaction with technological forces has risen, nor is this the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this new vital volume fits within the history of critiques of computing. Or, to frame this slightly differently, in what ways is the 21st century critique of computing different from the 20th century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty-Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing, including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter in a quarantined “Critiques” section from which to cast doubts on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and Weizenbaum’s chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to only seeing the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions have changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s and not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, consider the warning that Lewis Mumford issued in 1956:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    It is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs. For “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that this was “a new object of worship” seemed almost absurd. He continued issuing dire predictions about computers, but even by 1970, when Mumford wrote of the development of “computer dominated society,” such a warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976, those warnings were still somewhat muddled as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not as of yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.”  Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

    While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship”; by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society”; by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations; in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling these technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s final conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377), and the emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were so concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


    Works Cited

    • Lewis Mumford. The Transformations of Man. New York: Harper and Brothers, 1956.

  • Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)


    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Audrey Watters

    ~

    The future of education is technological. Necessarily so.

    Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

    To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

    Or so the story goes.

    Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

    And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

    We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

    Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

    Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.

    And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

    Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

    Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

    Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

    As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

    the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

    As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but it largely fails as a theory of capitalism.

    Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

    As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, he hoped, would at the very least transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

    And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.

    “I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

    Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines, forthcoming from The MIT Press. She maintains the widely-read Hack Education blog, on which an earlier version of this piece first appeared, and writes frequently for The b2o Review Digital Studies section on digital technology and education.


  • Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)


    a review of Jen Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, 2019)

    by Zachary Loeb

    ~

    Despite the oft-repeated, and rather questionable, trope that social media is biased against conservatives, and despite the attention that has been lavished on tech-savvy left-aligned movements (such as Occupy!) in recent years, it does not necessarily follow that social media is of greater use to the left. It may be quite the opposite. This is a topic that documentary filmmaker, activist and sociologist Jen Schradie explores in depth in her excellent and important book The Revolution That Wasn’t: How Digital Activism Favors Conservatives. Engaging with the political objectives of activists on the left and the right, Schradie’s book considers the political values that are reified in the technical systems themselves and the ways in which those values more closely align with the aims of conservative groups. Furthermore, Schradie emphasizes the socio-economic factors that allow particular groups to successfully harness high-tech tools, thereby demonstrating how digital activism reinforces the power of those who already enjoy a fair amount of power. Rather than suggesting that high-tech tools have somehow been stolen from the left by the right, The Revolution That Wasn’t argues that these were not the left’s tools in the first place.

    The background against which Schradie’s analysis unfolds is the state of North Carolina in the years after 2011. Generally seen as a “red state,” North Carolina had flipped blue for Barack Obama in 2008, leading to the state being increasingly seen as a battleground. Even though the state was starting to take on a purplish color, North Carolina was still home to a deeply entrenched conservativism that was reflected (and still is reflected) in many aspects of the state’s laws, and in the legacy of racist segregation that is still felt in the state. Though the Occupy! movement lingers in the background of Schradie’s account, her focus is on struggles in North Carolina around unionization, the rapid growth of the Tea Party, and the emergence of the “Moral Monday” movement which inspired protests across the state (starting in 2013). While many considerations of digital activism have focused on hip young activists festooned with piercings, hacker skills, and copies of The Coming Insurrection—the central characters of Schradie’s book are members of the labor movement, campus activists, Tea Party members, Preppers, people associated with “Patriot” groups, as well as a smattering of paid organizers working for large organizations. And though Schradie is closely attuned to the impact that financial resources have within activist movements, she pushes back against the “astroturf” accusation that is sometimes aimed at right-wing activists, arguing that the groups she observed on both the right and the left reflected genuine populist movements.

    There is a great deal of specificity to Schradie’s study, and many of the things that Schradie observes are particular to the context of North Carolina, but the broader lessons regarding political ideology and activism are widely applicable. In looking at the political landscape in North Carolina, Schradie carefully observes the various groups that were active around the unionization issue, and pays close attention to the ways in which digital tools were used in these groups’ activism. The levels of digital savviness vary across the political groups, and most of the groups demonstrate at least some engagement with digital tools; however, some groups embraced the affordances of digital tools to a much greater extent than others. And where Schradie’s book makes its essential intervention is not simply in showing these differing levels of digital use, but in explaining why. For one of the core observations of Schradie’s account of North Carolina is that it was not the left-leaning groups, but the right-leaning groups, who were able to make the most out of digital tools. It’s a point which, to a large degree, runs counter to general narratives on the left (and possibly also the right) about digital activism.

    In considering digital activism in North Carolina, Schradie highlights the “uneven digital terrain that largely abandoned left working-class groups while placing right-wing reformist groups at the forefront of digital activism” (Schradie, 7). In mapping out this terrain, Schradie emphasizes three factors that were pivotal in tilting this ground, namely class, organization, and ideology. Taken independently of one another, each of these three factors provides valuable insight into the challenges posed by digital activism, but taken together they allow for a clear assessment of the ways that digital activism (and digital tools themselves) favor conservatives. It is an analysis that requires some careful wading into definitions (the different ways that right and left groups define things like “freedom” really matters), but these three factors make it clear that “rather than offering a quick technological fix to repair our broken democracy, the advent of digital activism has simply ended up reproducing, and in some cases, intensifying, preexisting power imbalances” (Schradie, 7).

    Considering that the core campaign revolves around unionization, it should not particularly be a surprise that class is a major issue in Schradie’s analysis. Digital evangelists have frequently suggested that high-tech tools allow for the swift breaking down of class barriers by providing powerful tools (and informational access) to more and more people—but the North Carolinian case demonstrates the ways in which class endures. Much of this has to do with the persistence of the digital divide, something which can easily be overlooked by onlookers (and academics) who have grown accustomed to digital tools. Schradie points to the presence of “four constraints” that have a pivotal impact on the class aspect of digital activism: “Access, Skills, Empowerment, and Time” (or ASETs for short; Schradie, 61). “Access” points to the most widely understood part of the digital divide, the way in which some people simply do not have a reliable and routine way of getting ahold of and/or using digital tools—it’s hard to build a strong movement online, when many of your members have trouble getting online. This in turn reverberates with “Skills,” as those who have less access to digital tools often lack the know-how that develops from using those tools—not everyone knows how to craft a Facebook post, or how best to make use of hashtags on Twitter. While digital tools have often been praised precisely for the ways in which they empower users, this empowerment is often not felt by those lacking access and skills, leading many individuals from working-class groups to see “digital activism as something ‘other people’ do” (Schradie, 64). And though it may be the easiest factor to overlook, engaging in digital activism requires Time, something which is harder to come by for individuals working multiple jobs (especially of the sort with bosses that do not want to see any workers using phones at work).

    When placed against the class backgrounds of the various activist groups considered in the book, the ASETs framework clearly sets up a situation in which conservative activists had the advantage. What Schradie found was “not just a question of the old catching up with the young, but of the poor never being able to catch up with the rich” (Schradie, 79), as the more financially secure conservative activists simply had more ASETs than the working-class activists on the left. And though the right-wing activists skewed older than the left-wing activists, they proved quite capable of learning to use new high-tech tools. Furthermore, an extremely important aspect here is that the working-class activists (given their economic precariousness) had more to lose from engaging in digital activism—the conservative retiree will be much less worried about losing their job than the garbage truck driver interested in unionizing.

    Though the ASETs echo throughout the entirety of Schradie’s account, “Time” plays an essential connective role in the shift from matters of class to matters of organization. Contrary to the way in which the Internet has often been praised for invigorating horizontal movements (such as Occupy!), the activist groups in North Carolina attest to the ways in which old bureaucratic and infrastructural tools are still essential. Or, to put it another way, if the various ASETs are viewed as resources, then having a sufficient quantity of all four is key to maintaining an organization. This meant that groups with hierarchical structures, clear divisions of labor, and more staff (be these committed volunteers or paid workers) were better equipped to exploit the affordances of digital tools.

    Importantly, this was not entirely one-sided. Tea Party groups were able to tap into funding and training from larger networks of right-wing organizations, but national unions and civil rights organizations were also able to support left-wing groups. In terms of organization, the overwhelming bias is less pronounced along a right/left dichotomy and more a reflection of a clash between reformist and radical groups. When it came to organization the bias was towards “reformist” groups (right and left) that replicated present power structures and worked within the already existing social systems; the groups that lose out here tend to be the ones that more fully eschew hierarchy (an example of this being student activists). Though digital democracy can still be “participatory, pluralist, and personalized,” Schradie’s analysis demonstrates how “the internet over the long-term favored centralized activism over connective action; hierarchy over horizontalism; bureaucratic positions over networked persons” (Schradie, 134). Thus, the importance of organization demonstrates not how digital tools allowed for a new “participatory democracy” but rather how standard hierarchical techniques continue to be key for groups wanting to participate in democracy.

    Beyond class and organization (insofar as it is truly possible to get past either), the ideology of activists on the left and activists on the right has a profound influence on how these groups use digital tools. For it isn’t the case that the left and the right try to use the Internet for the exact same purpose. Schradie captures this as a difference between pursuing fairness (the left), and freedom (the right)—this largely consisted of left-wing groups seeking a “fairer” allocation of societal power, while those on the right defined “freedom” largely in terms of protecting the allocation of power already enjoyed by these conservative activists. Believing that they had been shut out by the “liberal media,” many conservatives flocked to and celebrated digital tools as a way of getting out “the Truth,” their “digital practices were unequivocally focused on information” (Schradie, 167). As a way of disseminating information, to other people already in possession of ASETs, digital means provided right-wing activists with powerful tools for getting around traditional media gatekeepers. While activists on the left certainly used digital tools for spreading information, their use of the internet tended to be focused more heavily on organizing: on bringing people together in order to advocate for change. Further complicating things for the left is that Schradie found there to be less unity amongst leftist groups in contrast to the relative hegemony found on the right. Comparing the intersection of ideological agendas with digital tools, Schradie is forthright in stating, “the internet was simply more useful to conservatives who could broadcast propaganda and less effective for progressives who wanted to organize people” (Schradie, 223).

    Much of the way that digital activism has been discussed by the press, and by academics, has advanced a narrative that frames digital activism as enhancing participatory democracy. In these standard tales (which often ground themselves in accounts of the origins of the internet that place heavy emphasis on the counterculture), the heroes of digital activism are usually young leftists. Yet, as Schradie argues, “to fully explain digital activism in this era, we need to take off our digital-tinted glasses” (Schradie, 259). Removing such glasses reveals the way in which they have too often focused attention on the spectacular efforts of some movements, while overlooking the steady work of others—thus driving more attention to groups like Occupy! than to the buildup of right-wing groups. And looking at the state of digital activism through clearer eyes reveals many aspects of digital life that are obvious, yet which are continually forgotten, such as the fact that “the internet is a tool that favors people with more money and power, often leaving those without resources in the dust” (Schradie, 269). The example of North Carolina shows that groups on the left and the right are all making use of the Internet, but it is not just a matter of some groups having more ASETs; it is also the fact that the high-tech tools of digital activism favor certain types of values and aims over others. And, as Schradie argues throughout her book, those tend to be the causes and aims of conservative activists.

    Despite the revolutionary veneer with which the Internet has frequently been painted, “the reality is that throughout history, communications tools that seemed to offer new voices are eventually owned or controlled by those with more resources. They eventually are used to consolidate power, rather than to smash it into pieces and redistribute it” (Schradie, 25). The question with which activists, particularly those on the left, need to wrestle is not just whether or not the Internet is living up to its emancipatory potential—but whether or not it ever really had that potential in the first place.

    * * *

    In an iconic photograph from 1948, a jubilant Harry S. Truman holds aloft a copy of The Chicago Daily Tribune emblazoned with the headline “Dewey Defeats Truman.” Despite the polls having predicted that Dewey would be victorious, when the votes were counted Truman had been sent back to the White House and the Democrats took control of the House and the Senate. An echo of this moment occurred some sixty-eight years later, though there was no comparable photo of Donald Trump smirking while holding up a newspaper carrying the headline “Clinton Defeats Trump.” In the aftermath of Trump’s victory pundits ate crow in a daze, pollsters sought to defend their own credibility by emphasizing that their models had never actually said that there was no chance of a Trump victory, and even some in Trump’s circle seemed stunned by his victory.

    As shock turned to resignation, the search for explanations and scapegoats began in earnest. Democrats blamed Russian hackers, voter suppression, the media’s obsession with Trump, left-wing voters who didn’t fall in line, and James Comey, while Republicans claimed that the shock was simply proof that the media was out of touch with the voters. Yet Republicans and Democrats seemed to at least agree on one thing: to understand Trump’s victory, it was necessary to think about social media. Granted, Republicans and Democrats were divided on whether this was a matter of giving credit or assigning blame. On the one hand, Trump had been able to effectively use Twitter to directly engage with his fan base; on the other hand, platforms like Facebook had been flooded with disinformation that spread rapidly through the online ecosystem. It did not take long for representatives, including executives, from the various social media companies to find themselves called before Congress, where these figures were alternately grilled about supposed bias against conservatives on their platforms, and taken to task for how their platforms had been so easily manipulated into helping Trump win the election.

    If the tech companies had only found themselves summoned before Congress, it would have been bad enough; but they were also facing frustrated employees and disgruntled users, and the word “techlash” was being used to describe the wave of mounting frustration with these companies. Certainly, unease with the power and influence of the tech titans had been growing for years. Cambridge Analytica was hardly the first tech scandal. Yet much of that earlier displeasure was tempered by an overwhelmingly optimistic attitude towards the tech giants, as though the industry’s problematic excesses were indicative of growing pains as opposed to being signs of intrinsic anti-democratic (small d) biases. There were many critics of the tech industry before the arrival of the “techlash,” but they were liable to find themselves denounced as Luddites if they failed to show sufficient fealty to the tech companies. From company CEOs to an adoring tech press to numerous technophilic academics, in the years prior to the 2016 election smart phones and social media were hailed for their liberating and democratizing potential. Videos shot on smart phone cameras and uploaded to YouTube, political gatherings organized on Facebook, activist campaigns turning into mass movements thanks to hashtags—all had been treated as proof positive that high tech tools were breaking apart the old hierarchies and ushering in a new era of high-tech horizontal politics.

    Alas, the 2016 election was the rock against which many of these high-tech hopes crashed.

    And though there are many strands contributing to the “techlash,” it is hard to make sense of this reaction without seeing it in relation to Trump’s victory. Users of Facebook and Twitter had been frustrated with those platforms before, but at the core of the “techlash” has been a certain sense of betrayal. How could Facebook have done this? Why was Twitter allowing Trump to break its own terms of service on a daily basis? Why was Microsoft partnering with ICE? How come YouTube’s recommendation algorithms always seemed to suggest far-right content?

    To state it plainly: it wasn’t supposed to be this way.

    But what if it was? And what if it had always been?

    In a 1985 interview with MIT’s newspaper The Tech, the computer scientist and social critic Joseph Weizenbaum had some blunt words about the ways in which computers had impacted society, telling his interviewer: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed” (ben-Aaron, 1985). This was not a new position for Weizenbaum; he had largely articulated the same idea in his 1976 book Computer Power and Human Reason, wherein he had pushed back at those he termed the “artificial intelligentsia” and the other digital evangelists of his day. Articulating his thoughts to the interviewer from The Tech, Weizenbaum raised further concerns about the close links between the military and computer work at MIT, and cast doubt on the real usefulness of computers for society—couching his dire fears in the social critic’s common defense “I hope I’m wrong” (ben-Aaron, 1985). As the decades passed, Weizenbaum unfortunately came to feel that he had been right. When he turned his critical gaze to the internet in a 2006 interview, he decried the “flood of disinformation,” while noting “it just isn’t true that everyone has access to the so-called Information age” (Weizenbaum and Wendt 2015, 44-45).

    Weizenbaum was hardly the only critic to have looked askance at the growing importance that was placed on computers during the 20th century. Indeed, Weizenbaum’s work was heavily influenced by that of his friend and fellow social critic Lewis Mumford who had gone so far as to identify the computer as the prototypical example of “authoritarian” technology (even suggesting that it was the rebirth of the “sun god” in technical form). Yet, societies that are in love with their high-tech gadgets, and which often consider technological progress and societal progress to be synonymous, generally have rather little time for such critics. When times are good, such social critics are safely quarantined to the fringes of academic discourse (and completely ignored within broader society), but when things get rocky they have their woebegone revenge by being proven right.

    All of which is to say that thinkers like Weizenbaum and Mumford would almost certainly agree with The Revolution That Wasn’t, though they would probably not be surprised by it. After all, The Revolution That Wasn’t is a confirmation that we are today living in the world about which previous generations of critics warned. Indeed, if there is one criticism to be made of Schradie’s work, it is that the book could have benefited from grounding its analysis more deeply in the longstanding critiques of technology made by the likes of Weizenbaum, Mumford, and quite a few other scholars and critics. Jo Freeman and Langdon Winner are both mentioned, but it’s important to emphasize that many social critics warned about the conservative biases of computers long before Trump got a Twitter account, and long before Mark Zuckerberg was born. Our widespread refusal to heed these warnings, and the tendency to mock those issuing them as Luddites, technophobes, and prophets of doom, is arguably a fundamental cause of the present state of affairs which Schradie so aptly describes.

    With The Revolution That Wasn’t, Jen Schradie has made a vital intervention in current discussions (inside the academy and amongst activists) regarding the politics of social media. Eschewing polemic, refusing either to sing the praises of social media or to condemn it outright, Schradie provides a measured assessment that addresses the way in which social media is actually being used by activists of varying political stripes—with a careful emphasis on the successes these groups have enjoyed. There is a certain extent to which Schradie’s argument, and some of her conclusions, represent a jarring contrast to much of the literature that has framed social media as being a particular boon to left-wing activists. Yet Schradie’s book highlights with disarming detail the ways in which a desire (on the part of left-leaning individuals) to believe that the Internet favors people on the left has been a sort of ideological blinder that has prevented them from fully coming to terms with how the Internet has re-entrenched the dominant powers in society.

    What Schradie’s book reveals is that “the internet did not wipe out barriers to activism; it just reflected them, and even at times exacerbated existing power differences” (Schradie, 245). Schradie allows the activists on both sides to speak in their own words, taking seriously their claims about what they were doing. And while the book is closely anchored in the context of a particular struggle in North Carolina, the analytical tools that Schradie develops (such as the ASET framework, and the tripartite emphasis on class/organization/ideology) allow Schradie’s conclusions to be mapped onto other social movements and struggles.

    While the research that went into The Revolution That Wasn’t clearly predates the election of Donald Trump, and though he is not a main character in the book, the 45th president lurks in its background (or perhaps just in the reader’s mind). Had Trump lost the election, every part of Schradie’s analysis would be just as accurate and biting; however, those seeking to defend social media tools as inherently liberating would probably not find themselves on the defensive today (a position most of them never expected to be in). Yet what makes Schradie’s account so important is that the book is not simply concerned with whether or not particular movements used digital tools; rather, Schradie is able to step back to consider the degree to which the use of social media tools has been effective in fulfilling the political aims of the various groups. Yes, Occupy! may have made canny use of hashtags (and, if one wants to be generous, one can say that it helped inject the discussion of inequality back into American politics), but nearly ten years later the wealth gap continues to grow. For all of the hopeful luster that has often surrounded digital tools, Schradie’s book shows the way in which these tools have just placed a fresh coat of paint on the same old status quo—even if this coat of paint is shiny and silvery.

    As the technophiles scramble to rescue the belief that the Internet is inherently democratizing, The Revolution That Wasn’t takes its place amongst a growing body of critical works that are willing to challenge the utopian aura that has been built up around the Internet. While it must be emphasized, as the earlier allusion to Weizenbaum shows, that there have been thinkers criticizing computers and the Internet for as long as there have been computers and the Internet, of late there has been an important expansion of such critical works. There is not the space here to offer an exhaustive account of all of the critical scholarship being conducted, but it is worthwhile to mention some exemplary recent works. Safiya Umoja Noble’s Algorithms of Oppression provides an essential examination of the ways in which societal biases, particularly about race and gender, are reinforced by search engines. Ruha Benjamin’s recent work on the “New Jim Code,” in Race After Technology and in the Captivating Technology volume she edited, foregrounds the ways in which technological systems reinforce white supremacy. The work of Virginia Eubanks, both Digital Dead End (whose concerns make it likely the most important precursor to Schradie’s book) and her more recent Automating Inequality, discusses the ways in which high tech systems are used to police and control the impoverished. Examinations of e-waste (such as Jennifer Gabrys’s Digital Rubbish) and infrastructure (such as Nicole Starosielski’s The Undersea Network, and Tung-Hui Hu’s A Prehistory of the Cloud) point to the ways in which colonial legacies are still very much alive in today’s high tech systems, while the internationalist sheen that is often ascribed to digital media is carefully deconstructed in works like Ramesh Srinivasan’s Whose Global Village? Works like Meredith Broussard’s Artificial Unintelligence and Shoshana Zuboff’s The Age of Surveillance Capitalism raise deep questions about the overall politics of digital technology. And, with its deep analysis of the way that race and class are intertwined with digital access and digital activism, The Revolution That Wasn’t deserves a place amongst such works.

    What much of this recent scholarship has emphasized is that technology is never neutral. And while this may be accepted wisdom amongst scholars in the relevant fields, these works (and scholars) have taken great care to make the point to the broader public. It is not just that tools can be used for good or for bad—but that tools have particular biases built into them. Pretending those biases aren’t there doesn’t make them go away. The first of Kranzberg’s laws asserts that technology is neither good nor bad, nor is it neutral—but when one moves from talking about technology in general to talking about particular technologies, it is quite important to be able to say that certain technologies may actually be bad. This is a particular problem when one wants to consider things like activism. There has always been something asinine to the tactic of mocking activists pushing for social change while using devices created by massive multinational corporations (as the well-known comic by Matt Bors notes); however, the reason that this mockery is so often repeated is that it has a kernel of troubling truth to it. After all, there is something a little discomforting about using a device running on minerals mined in horrendous conditions, which was assembled in a sweatshop, and which will one day go on to be poisonous e-waste—for organizing a union drive.

    Matt Bors, detail from “Mister Gotcha” (2016)

    Or, to put it slightly differently, when we think about the democratizing potential of technology, to what extent are we privileging those who get to use (and discard) these devices over those whose labor goes into producing them? That activists may believe they are using a given device or platform for “good” purposes does not mean that the device itself is actually good. And this is a tension Schradie gets at when she observes that “instead of a revolutionary participatory tool, the internet just happened to be the dominant communication tool at the time of my research and simply became normalized into the groups’ organizing repertoire” (Schradie, 133). Of course, activists (of varying political stripes) are making use of the communication tools that are available to them and widely used in society. But just because activists use a particular communication tool doesn’t mean that they should fall in love with it.

    This is not in any way to call activists using these tools hypocritical, but it is a further reminder of the ways in which high-tech tools inscribe their users within the very systems they may be seeking to change. And this is certainly a problem that Schradie’s book raises, as she notes that one of the reasons conservative values get a bump from digital tools is that these conservatives are generally already the happy beneficiaries of the systems that created these tools. Scholarship on digital activism has considered the ideologies of various technologically engaged groups before, and there have been many strong works produced on hackers and open source activists, but often the emphasis has been placed on the ideologies of the activists without enough consideration being given to the ways in which the technical tools themselves embody certain political values (an excellent example of a work that truly considers activists picking their tools based on the values of those tools is Christina Dunbar-Hester’s Low Power to the People). Schradie’s focus on ideology is particularly useful here, as it helps to draw attention to the way in which various groups’ ideologies map onto or come into conflict with the ideologies that these technical systems already embody. What makes Schradie’s book so important is not just its account of how activists use technologies, but its recognition that these technologies are also inherently political.

    Yet the thorny question that undergirds much of the present discourse around computers and digital tools remains “what do we do if, instead of democratizing society, these tools are doing just the opposite?” And this question only becomes tougher the further down you go: if the problem is just Facebook, you can pose solutions such as regulation and breaking it up; however, if the problem is that digital society rests on a foundation of violent extraction, insatiable lust for energy, and rampant surveillance, solutions are less readily available. People have become so accustomed to thinking that these technologies are fundamentally democratic that they are loath to believe analyses, such as Mumford’s, that find them instead authoritarian by nature.

    While reports of a “techlash” may be overstated, it is clear that at the present moment it is permissible to be a bit more critical of particular technologies and the tech giants. However, there is still a fair amount of hesitance about going so far as to suggest that maybe there’s just something inherently problematic about computers and the Internet. After decades of being told that the Internet is emancipatory, many people remain committed to this belief, even in the face of mounting evidence to the contrary. Trump’s election may have placed some significant cracks in the dominant faith in these digital devices, but suggesting that the problem goes deeper than Facebook or Amazon is still treated as heretical. Nevertheless, it is a matter that is becoming harder and harder to avoid. For it is increasingly clear that it is not a matter of whether or not these devices can be used for this or that political cause, but of the overarching politics of these devices themselves. It is not just that digital activism favors conservatism, but as Weizenbaum observed decades ago, that “the computer has from the beginning been a fundamentally conservative force.”

    With The Revolution That Wasn’t, Jen Schradie has written an essential contribution to current conversations not only about the use of technology for political purposes, but also about the politics of technology. As an account of left-wing and right-wing activists, Schradie’s book is a worthwhile consideration of the ways that various activists use these tools. Yet where this altogether excellent work really stands out is in highlighting the politics that are embedded in and reified by high-tech tools. Schradie is certainly not suggesting that activists abandon their devices—insofar as these are the dominant communication tools at present, activists have little choice but to use them—but this book puts forth a nuanced argument about the need for activists to think critically about whether they are using digital tools, or whether the digital tools are using them.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • ben-Aaron, Diana. 1985. “Weizenbaum Examines Computers and Society.” The Tech (Apr 9).
    • Weizenbaum, Joseph, and Gunna Wendt. 2015. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society. Duluth, MN: Litwin Books.
  • “Dennis Erasmus” — Containment Breach: 4chan’s /pol/ and the Failed Logic of “Safe Spaces” for Far-Right Ideology

    “Dennis Erasmus” — Containment Breach: 4chan’s /pol/ and the Failed Logic of “Safe Spaces” for Far-Right Ideology

    “Dennis Erasmus”

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Author’s Note: This article was written prior to the events of the deadly far-right riot in Charlottesville, Virginia, on August 11-12, 2017. Footnotes have been added with updated information where it is possible or necessary, but it has otherwise been largely unchanged.

    Introduction

    This piece is a discussion of one place on the internet where the far right meets, formulates their propaganda and campaigns, and ultimately reproduces and refines its ideology.

    4chan’s Politically Incorrect image board (like other 4chan boards, regularly referred to by the last portion of its URL, “/pol/”) is one of the most popular boards on the highly active and gently-moderated website, as well as a major online hub for far-right politics, memes, and coordinated harassment campaigns. Unlike most of the hobby-oriented boards on 4chan, /pol/ came into its current form through a series of board deletions and restorations with the intent of improving the discourse of the hobby boards by restricting unrelated political discussion, often of a bigoted nature, to a single location on the website. /pol/ is thus often referred to as a “containment board” with the understanding that far-right content is meant to be kept in that single forum.

    4chan’s original owner (and current Google employee) Christopher Poole (alias “moot”) deleted the /new/ – News board on January 17, 2011, because of the disproportionate amount of racist discussion it hosted; /pol/ – Politically Incorrect was then added to the website on November 10, 2011. In Poole’s words:

    As for /new/, anybody who used it knows exactly why it was removed. When I re-added the board last year, I made a note that if it devolved into /stormfront/, I’d remove it. It did — ages ago. Now it’s gone, as promised.[1]

    “/stormfront/” is a reference to Stormfront.org, one of the oldest and largest white supremacist forums on the internet. Stormfront was founded by a former KKK leader and is listed as an extremist group by the Southern Poverty Law Center (Southern Poverty Law Center 2017c).

    Despite Poole’s demonstrated commitment to maintaining a news board that was not dominated by far-right content, /pol/ nevertheless followed the path of /new/ and gained a reputation as a haven for white supremacist politics (Dewey 2014).

    While there was the intention to keep political discussion contained in /pol/, far-right politics is a frequent theme on the other major discussion boards on the website and has come to be strongly associated with 4chan in general.

    The Logic of Containment

    The nature of 4chan means that for every new thread created, an old thread “falls off” of the website and is deleted or archived. Because of its high worldwide popularity and the fast pace of discussion, it has sometimes been viewed as necessary to split up boards into specific topics so that the rate of thread creation does not prematurely end productive, on-topic, ongoing conversations.

    The most significant example of a topic requiring “containment” is perhaps My Little Pony. The premiere of the 2010 animated series My Little Pony: Friendship is Magic led to a surge of interest in the franchise and a major fan following composed largely of young adult males (covered extensively in the media as “bronies”), 4chan’s key demographic (Whatisabrony.com 2017).

    Posters who wished to discuss other cartoons on the /co/ – Comics and Cartoons board were often left feeling crowded out by the intense and rapid pace of the large and excited fanbase that was only interested in discussing ponies. After months of complaints, a new board, /mlp/ – My Little Pony, was opened to accommodate both fans and detractors by giving the franchise a dedicated platform for discussion. For the most part, fans have been happy to stay and discuss the series among one another. There is also a site-wide rule that pony-related discussion must be confined to /mlp/, and while enforcement of 4chan’s rules is notoriously lax, this one has mostly been applied (4chan 2017).

    A similar approach has been taken for several other popular hobbies; for instance, the creation of /vp/ – Pokémon for all media—be it video games, comics, or television—related to the very popular Japanese franchise.

    A common opinion on 4chan is that /pol/ serves as a “containment board” for the neo-Nazi, racist, and other far-right interests of many who use the website (Anonymous /q/ poster 2012). Someone who posts a blatantly political message on the /tv/ – Television and Film board, for instance, may be told “go back to your containment board.” One could argue, as well, that the popular and rarely moderated /b/ – Random board was originally a “containment board” for all of the off-topic discussion that would otherwise have derailed the specific niche or hobby boards.

    Moderators as Humans

    Jay Irwin, a moderator of 4chan and an advertising technology professional, wrote an article for The Observer,[2] published April 25, 2017, arguing that an unwelcome “liberal agenda” in entertainment was inspiring greater conservatism on 4chan’s traditionally apolitical boards. Generalizations about the nature of 4chan’s userbase can be difficult, but Irwin’s status as a moderator means he has the ability to remove certain discussion threads while allowing others to flourish, shaping the discourse and apparent consensus of the website’s users.

    Irwin’s writing in The Observer shows a clear personal distaste for what he perceives as a liberal political agenda: in this specific case, Bill Nye’s assertion, backed up by today’s scientific consensus regarding human biology, that gender is a spectrum and not a binary:

    The show shuns any scientific approach to these topics, despite selling itself—and Bill Nye—as rigorously reason-based. Rather than providing evidence for the multitude of claims made on the show by Nye and his guests, the series relies on the kind of appeals to emotion one would expect in a gender studies class…The response on /tv/ was swift. The most historically apolitical 4channers are almost unanimously and vehemently opposed to the liberal agenda and lack of science on display in what is billed as a science talk show. Scores of 4chan users who have always avoided and discouraged political conversations have expressed horror at what they see as a significant uptick in the entertainment industry’s attempts to indoctrinate viewers with leftist ideology. (Irwin 2017)

    As Irwin believes the users of /tv/ are becoming less tolerant of liberal media, he expects them to also become warmer to far-right ideas and discussions that they once would have dismissed as off-topic and out of place on a television and film discussion board. Whether or not this is true of the /tv/ userbase, his obvious bias in favor of these ideas can inform the moderation applied when determining just how “off-topic” an anti-liberal thread might be.

    On the other end of the spectrum, a 4chan moderator was previously removed from the moderation team after issuing a warning to a user on explicitly political grounds. In the aftermath of the December 2, 2016 fatal fire at the Ghost Ship warehouse, an artists’ space and venue in Oakland, California that killed thirty-six people, users of /pol/ attempted to organize a campaign to shut down DIY (“Do-it-yourself”) spaces across the United States by reporting noncompliance with fire codes to local authorities, in order to “crush the radical left” (KnowYourMeme 2017). As another moderator confirmed in a thread on /qa/, the board designed for discussions about 4chan, the fired moderator had clearly stated their belief that the campaign to shut down DIY spaces was an attack on marginalized communities by neo-Nazis (Anonymous##Mod 2016).

    The anti-DIY campaign is a clear example of the kind of “brigading”—use of /pol/ as an organizational and propaganda hub for right-wing political activities on other sites or in real life—that regularly occurs on the mostly-anonymous imageboard. The fired moderator’s error was not having a political agenda—as Irwin’s writing in The Observer demonstrates, Irwin has an agenda of his own—but expressing it directly. They could have done as Irwin has the capacity to do, selectively deleting threads not to their liking with no justification required, so as to maintain the facade of neutrality that is so important for the financially struggling site’s brand.

    He Will Not Divide Us

    Another such example of brigading activities would be the harassment surrounding the art project “He Will Not Divide Us” (HWNDU) by Shia LaBeouf, Nastja Säde Rönkkö & Luke Turner. Launched during the inauguration of President Trump on January 20, 2017, the project was to broadcast a 24-hour live stream for four years from outside of the Museum of the Moving Image in New York City. LaBeouf was frequently at the location leading crowds in relatively inoffensive chants: “he will not divide us,” and the like.

    LaBeouf, Rönkkö & Turner, HE WILL NOT DIVIDE US (2017). Image source: Nylon

    Within a day, threads on /pol/ calling for raids against the exhibit were amassing hundreds of replies, with suggestions ranging from leaving booby-trapped racist posters taped on top of razor blades so as to cut people who tried to remove them, to simply sending in “the right wing death squads” (Anonymous /pol/ poster 2017). Notably, as the /pol/ brigaders themselves pointed out, two of the three HWNDU artists, LaBeouf and Turner, are Jewish.

    Raid participants who coordinated on /pol/ and other far-right websites flashed white nationalist paraphernalia and neo-Nazi tattoos, and, within five days of the project’s opening, directly told LaBeouf “Hitler did nothing wrong” while he was present at the exhibit (Horton 2017). LaBeouf was himself arrested and charged with misdemeanor assault against one of the people who came to the exhibit intending to disrupt it, though the charges were later dismissed (France 2017).

    On February 10, less than a month into the intended four-year run of the project, the Museum of the Moving Image released a statement declaring its intent to shut down HWNDU, perhaps at the urging of the NYPD, which had to dedicate resources to monitoring the space after regular clashes:

    The installation created a serious and ongoing public safety hazard for the museum, its visitors, its staff, local residents and businesses. The installation had become a flashpoint for violence and was disrupted from its original intent. While the installation began constructively, it deteriorated markedly after one of the artists was arrested at the site of the installation and ultimately necessitated this action. (Saad 2017)

    High-profile liberal advocates of free speech causes did not draw attention to the implications of a Jewish artist’s exhibit being cancelled due to constant harassment by neo-Nazis and other far-right elements. New York magazine’s Jonathan Chait, one of the most high-profile liberal opponents of “politically correct” suppression of speech, spent his time policing the limits of discourse by criticizing anti-fascist political activists (Chait 2017). The American Civil Liberties Union spent its energy defending former right-wing celebrity and noted pederasty advocate Milo Yiannopoulos against his critics (NPR 2017).

    Containment Failure

    Those among 4chan’s leadership who sincerely believed themselves to be politically neutral, or at least not far-right, were mistaken to view far-right politics as simply another hobby rather than the basis of an ideology.

    Ideology is not easily compartmentalized. Unlike a hobby, an ideology has the power to follow its adherents into all areas of their lives. Whether that ideology is cultivated in a “safe space” that is digital or physical, it is nonetheless brought with its possessor out into the world.

    Attempting to contain far-right ideology in physical and virtual spaces provides its followers with one of the essential requirements the ideology needs to thrive and to contribute to society’s reactionary movements.

    By way of comparison, the users of /mlp/ or other successful containment boards do not use their discussion space to organize raids and targeted harassment campaigns because, basically, hobbies do not traditionally have antagonists (with Gamergate being a notable exception). Adherents to far-right ideology, on the other hand, see liberal protesters, Hollywood activists, “cultural Marxists,” “globalist Jews,” white people comfortable with interracial marriages, black and brown people of all persuasions, and anti-fascist street fighters to be in direct opposition to their interests. When gathered with like-minded people, they will discuss the urgency of combating these forces and, if possible, encourage one another to act against these enemies.

    It seems obvious that a board which has been documented organizing campaigns to harass a Jewish artist until his art exhibit is shut down, or to force the closure of spaces its users believe belong to the “far left,” is anything but contained.

    If anything, the DIY venue example shows exactly how the average /pol/ user views designated ideological spaces: leftists will use those venues to organize, they assert, and if we take them away, we can decrease their capacity. If a DIY venue really did keep leftists contained, it would be advantageous for /pol/ to leave such venues alone and let leftists keep talking among themselves. Rather, the far-right /pol/ userbase demonstrates through its actions that it believes leftists use their political spaces in the same way /pol/ uses its own: as a base for launching attacks against their enemies.

    Countdown: What Comes Next

    The political right in the United States remains divided in tactics, aesthetics, and capacity.

    Footage surfaced from a June 10, 2017 rally in Houston, Texas, showing an alt-right activist being choked by an Oath Keeper—a member of a right-wing paramilitary organization—following a disagreement (Kragie and Lewis 2017). The alt-right activist is clearly signaling his affiliation with the internet-fueled right one might find in or inspired by /pol/, displaying posters that represent several recognizable 4chan memes (Pepe, Wojak/”feels guy”, Baneposting), in addition to neo-Nazi imagery (a stylized SS in the words “The Fire Rises,” an American flag modified to contain the Nazi-associated Black Sun or Sonnenrad). Which element of his approach provoked the ire of the Oath Keepers—identified by the SPLC as one of the largest anti-government organizations in the country—is not clear (Southern Poverty Law Center 2017b). The differences between the far-right inspired by 4chan and the paramilitary far-right drawn largely from ex-military and ex-police may be mostly aesthetic, but these differences nonetheless matter.[3]

    None of this is to discount the threat to life posed by the young and awkward meme-spouting members of the far-right. Brandon Russell, aged 21, was found in possession of bomb-making materials, including explosive chemicals and radioactive substances, and arrested by authorities in Florida. He admitted his affiliation with an online neo-Nazi group called Atomwaffen (German for “atomic weapons”), an SPLC-identified hate group (Southern Poverty Law Center 2017a).

    Russell was not found due to an investigation into terroristic far-right groups, but because of a bizarre series of events in which one of his three roommates, who claimed to have originally shared the neo-Nazi beliefs of the others, allegedly converted to Islam and murdered the other two for disrespecting his new faith. Police only found Russell’s bomb and radioactive materials while examining this crime scene (Elfrink 2017).

    The Trump regime and its Department of Justice, then headed by Jefferson Beauregard Sessions, indicated that it planned to cut off what little funding had been directed towards investigating far-right and white supremacist extremist groups, instead focusing purely on the specter of Islamic extremism (Pasha-Robinson 2017).

    By several metrics, far-right terrorism is a greater threat to Americans than terrorism connected to Islamism, and seems on track to maintain this record (Parkin et al. 2017).

    A federal judge ruled that Russell, who was found to own a framed photograph of Oklahoma City bomber Timothy McVeigh—whose ammonium nitrate bomb killed 168 people in 1995—may be released on bond, writing that there was no evidence that he used or planned to use a homemade radioactive bomb (Phillips 2017). Admitted affiliation with neo-Nazi ideology, which glorifies a regime known for massacring leftists, minorities, and Jews, was not taken as evidence of a desire to maim or kill leftists, minorities, or Jews.

    Just like the well-intentioned 4chan moderators who believed in the compartmentalization or “containability” of ideology, U.S. Magistrate Judge Thomas McCoun III seemed to believe that neo-Nazi ideology is little more than a hobby that can be pursued separately from one’s procurement and assembly of chemical bombs. McCoun did not consider that far-right politics is not a simple interest, but produces a worldview that generates answers to why one assembles a dirty bomb and how it is ultimately used.

    Judge McCoun only changed his mind and revoked the order to grant Russell bail after seeing video testimony from Russell’s former roommate, who claimed Russell planned to use a radioactive bomb to attack a nuclear power plant in Florida with the intention of irradiating ocean water and wiping out “parts of the Eastern Seaboard” (Sullivan 2017). Living with other neo-Nazis, it seems, gave Russell the confidence and safe space he needed to plan to carry out a McVeigh-style attack to inflict massive loss of life.[4]

    Finally, one should note that Russell, who might still be free were it not for the brash murders allegedly committed by his roommate, is also a member of the Florida National Guard. The internet far-right may look and sound quite different from the paramilitary Oath Keepers today, but that difference may change in time, as well.

    _____

    Dennis Erasmus (pseudonym) (@erasmusNYT) lived in Charlottesville, Virginia for six years prior to 2016. He has studied political theory and was active on 4chan for roughly eight years.


    _____

    Notes
    [1] Statement posted by moot on Nov at the /tmp/ board at http://content.4chan.org/tmp/r9knew.txt, and previously archived at the Webcite 4chan archive http://www.webcitation.org/6159jR9pC, and accessed by the author on July 9, 2017. The archive was deleted in early 2019.

    [2] The New York Observer, now a web-only publication, came under the ownership of Jared Kushner, President Donald J. Trump’s son-in-law, in 2006. The Observer is one of relatively few papers to have endorsed Trump during the 2016 Republican primary.

    [3] The alt-right activist who said “these are good memes” is supposedly William Fears, who was present at the Charlottesville 2017 riot and was arrested later that year in connection with a shooting directed at anti-racist protesters in Florida. While Fears’ brother pled guilty to accessory after the fact to attempted first-degree murder, charges against Fears were dropped so he could be extradited to Texas for hitting and choking his ex-girlfriend. See Brett Barrouquere, “Texas Judge Hikes Bond on White Supremacist William Fears” (SPLC, Apr 17, 2018) and Brett Barrouquere, “Cops Say Richard Spencer Supporter William Fears IV Choked Girlfriend Days Before Florida Shooting” (SPLC, Jan 23, 2018).

    [4] Russell pled guilty to possession of an unlicensed destructive device and improper storage of explosive materials. He was sentenced to five years in prison. U.S. District Judge Susan Bucklew said “it’s a difficult case” and that Russell seemed “like a very smart young man.” See “Florida Neo-Nazi Leader Gets 5 Years for Having Explosive Material” (AP, Jan 9, 2018).
    _____

    Works Cited

     

  • Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism

    Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism

    Leif Weatherby

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Take three minutes to watch this clip from a rally in New York City just after the 2016 presidential election.[i] In the impromptu interview, we learn that Donald Trump is going to “raise the ancient city of Thule” and “complete the system of German Idealism.” In what follows, I’m going to interpret what the troll in the video—known only by his twitter handle, @kantbot2000—is doing here. It involves Donald Trump, German Idealism, metaphysics, social media, and above all irony. It’s a diagnosis of the current relationship between mediated speech and politics. I’ll come back to Kantbot presently, but first I want to lay the scene he’s intervening in.

    A small but deeply networked group of self-identifying trolls and content-producers has used the apparently unlikely rubric of German philosophy to diagnose our media-rhetorical situation. There’s less talk of trolls now than there was in 2017, but that doesn’t mean they’re gone.[ii] Take the recent self-introductory op-ed by Brazil’s incoming foreign minister, Ernesto Araújo, which bizarrely accuses Ludwig Wittgenstein of undermining the nationalist identity of Brazilians (and everyone else). YouTube remains the global channel of this Alt Right[iii] media game, as Andre Pagliarini has documented: one Olavo de Carvalho, whose channel is dedicated to the peculiar philosophical obsessions of the global Alt Right, is probably responsible for this foreign minister taking the position, apparently intended as policy, “I don’t like Wittgenstein,” and possibly for his appointment in the first place. The intellectuals playing this game hold that Marxist and postmodern theory caused the political world to take its present shape, and argue that a wide variety of theoretical tools should be reappropriated to the Alt Right. This situation presents a challenge to the intellectual Left on both epistemological and political grounds.

    The core claim of this group—one I think we should take seriously—is that mediated speech is essential to politics. In a way, this claim is self-fulfilling. Araújo, for example, imagines that Wittgenstein’s alleged relativism is politically efficacious; Wittgenstein arrives pre-packaged by the YouTube phenomenon Carvalho; Araújo’s very appointment seems to have been the result of Carvalho’s influence. That this tight ideological loop should realize itself by means of social media is not surprising. But in our shockingly naïve public political discussions—at least in the US—emphasis on the constitutive role of rhetoric and theory appears singular. I’m going to argue that a crucial element of this scene is a new tone and practice of irony that permeates the political. This political irony is an artefact of 2016, most directly, but it lurks quite clearly beneath our politics today. And to be clear, the self-styled irony of this group is never at odds with a wide variety of deeply held, and usually vile, beliefs. This is because irony and seriousness are not, and have never been, mutually exclusive. The idea that the two cannot cohabit is one of the more obvious weak points of our attempt to get an analytical foothold on the global Alt Right—to do so, we must traverse the den of irony.

    Irony has always been a difficult concept, slippery to the point of being undefinable. It usually means something like “when the actual meaning is the complete opposite from the literal meaning,” as Ethan Hawke tells Winona Ryder in 1994’s Reality Bites. Ryder’s plaint, “I know it when I see it,” points to just how many questions this definition raises. What counts as a “complete opposite”? What is the channel—rhetorical, physical, or otherwise—by which this dual expression can occur? What does it mean that what we express can contain not only implicit or connotative content, but can in fact make our speech contradict itself to some communicative effect? And for our purposes, what does it mean when this type of question embeds itself in political communication?

    Virtually every major treatment of irony since antiquity—from Aristotle to Paul de Man—acknowledges these difficulties. Quintilian gives us the standard definition: that the meaning of a statement is in contradiction to what it literally extends to its listener. But he still equivocates about its source:

    eo vero genere, quo contraria ostenduntur, ironia est; illusionem vocant. quae aut pronuntiatione intelligitur aut persona aut rei natura; nam, si qua earum verbis dissentit, apparet diversam esse orationi voluntatem. Quanquam id plurimis tropis accidit, ut intersit, quid de quoque dicatur, quia quod dicitur alibi verum est.

    On the other hand, that class of allegory in which the meaning is contrary to that suggested by the words, involve an element of irony, or, as our rhetoricians call it, illusio. This is made evident to the understanding either by the delivery, the character of the speaker or the nature of the subject. For if any one of these three is out of keeping with the words, it at once becomes clear that the intention of the speaker is other than what he actually says. In the majority of tropes it is, however, important to bear in mind not merely what is said, but about whom it is said, since what is said may in another context be literally true. (Quintilian 1920, book VIII, section 6, 53-55)

    Speaker, ideation, context, addressee—all of these are potential sources for the contradiction. In other words, irony is not limited to the intentional use of contradiction, to a wit deploying irony to produce an effect. Irony slips out of precise definition even in the version that held sway for more than a millennium in the Western tradition.

    I’m going to argue in what follows that irony of a specific kind has re-opened what seemed a closed channel between speech and politics. Certain functions of digital, and specifically social, media enable this kind of irony, because the very notion of a digital “code” entailed a kind of material irony to begin with. This type of irony can be manipulated, but also exceeds anyone’s intention, and can be activated accidentally (this part of the theory of irony comes from the German Romantic Friedrich Schlegel, as we will see). It not only amplifies messages, but does so by resignifying, exploiting certain capacities of social media. Donald Trump is the master practitioner of this irony, and Kantbot, I’ll propose, is its media theorist. With this irony, political communication has exited the neoliberal speech regime; the question is how the Left responds.

    i. “Donald Trump Will Complete the System of German Idealism”

    Let’s return to our video. Kantbot is trolling—hard. There’s obvious irony in the claim that Trump will “complete the system of German Idealism,” the philosophical network that began with Immanuel Kant’s Critique of Pure Reason (1781) and ended (at least on Kantbot’s account) only in the 1840s with Friedrich Schelling’s philosophy of mythology. Kant is best known for having cut a middle path between empiricism and rationalism. He argued that our knowledge is spontaneous and autonomous, not derived from what we observe but combined with that observation and molded into a nature that is distinctly ours, a nature to which we “give the law,” set off from a world of “things in themselves” about which we can never know anything. This philosophy touched off what G.W.F. Hegel called a “revolution,” one that extended to every area of human knowledge and activity. History itself, Hegel would famously claim, was the forward march of spirit, or Geist, the logical unfolding of self-differentiating concepts that constituted nature, history, and institutions (including the state). Schelling, Hegel’s one-time roommate, had deep reservations about this triumphalist narrative, reserving a place for the irrational, the unseen, the mythological, in the process of history. Hegel, according to a legend propagated by his students, finished his 1807 Phenomenology of Spirit while listening to the guns of the battle of Auerstedt-Jena, where Napoleon defeated the Germans and brought a final end to the Holy Roman Empire. Hegel saw himself as the philosopher of Napoleon’s moment, at least in 1807; Kantbot sees himself as the Hegel to Donald Trump (more on this below).

    Rumor has it that Kantbot is an accountant in NYC, although no one has been able to doxx him yet. His Twitter account has more than 26,000 followers at the time of writing. This modest fame is complemented by a deep lateral network among the biggest stars on the Far Right. To my eye he has made little progress in the last year, either in gaining fame or in developing his theory (on which he has recently promised a book “soon”). Conservative media reported that he was interviewed by the FBI in 2018. His newest line of thought involves “hate hoaxes” and questioning why he can’t say the n-word—a regression to platitudes of the extremist Right that have been around for decades, as David Neiwert has extensively documented (Neiwert 2017). Sprinkled between these are exterminationist fantasies—about “Spinozists.” He toggles between conspiracy (especially of the false-flag variety), flirtation with hate speech, and analysis. He has recently started a podcast. The whole presentation is saturated in irony and deadly serious:

    Asked how he identifies politically, Kantbot recently claimed to be a “Stalinist, a TERF, and a Black Nationalist.” Mike Cernovich, the Alt Right leader who runs the website Danger and Play, has been known to ask Kantbot for advice. There is also an indirect connection between Kantbot and “Neoreaction” or NRx, a brand of “accelerationism” which is itself only blurrily constituted by the blog-work of Curtis Yarvin (aka Mencius Moldbug) and by enthusiasm for the philosophy of Nick Land (another reader of Kant). Kantbot also “debated” White Nationalist thought leader Richard Spencer, presenting the spectacle of Spencer, who wrote a master’s thesis on Adorno’s interpretation of Wagner, listening thoughtfully to Kantbot’s explanation of Kant’s rejection of Johann Gottfried Herder, rather than the body count, as the reason to reject Marxism.

    When conservative pundit Ann Coulter got into a twitter feud with Delta over a seat reassignment, Kantbot came to her defense. She retweeted the captioned image below, which was then featured on Breitbart News in an article called “Zuckerberg 2020 Would be a Dream Come True for Republicans.”

    Kantbot’s partner-in-crime, @logo-daedalus (the very young guy in the maroon hat in the video), has recently jumped on a minor fresh wave of ironist political memeing in support of the UBI-focused presidential candidate Andrew Yang (#yanggang). He was once asked by Cernovich if he had read Michael Walsh’s book, The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West:

    The autodidact intellectualism of this Alt Right dynamic duo—Kantbot and Logodaedalus—illustrates several roles irony plays in the relationship between media and politics. Kantbot and Logodaedalus see themselves as the avant-garde of a counterculture on the brink of a civilizational shift, participating in the sudden proliferation of “decline of the West” narratives. They alternate targets on Twitter, and think of themselves as “producers of content” above all. To produce content, according to them, is to produce ideology. Kantbot is singularly obsessed with the period between about 1770 and 1830 in Germany. He thinks of this period as the source of all subsequent intellectual endeavor, the only period of real philosophy—a thesis he shares with Slavoj Žižek (Žižek 1993).

    This notion has been treated monographically by Eckart Förster in The Twenty-Five Years of Philosophy, a book Kantbot listed in May of 2017 under “current investigations.” His twist on the thesis is that German Idealism is saturated in a form of irony. German Idealism never makes culture political as such. Politics comes from a culture that’s more capacious than any politics, so any relation between the two is refracted by a deep difference that appears, when they are brought together, as irony. Marxism, and all that proceeds from Marxism, including contemporary Leftism, is a deviation from this path.


    This reading of German Idealism is a search for the metaphysical origins of a common conspiracy theory in the Breitbart wing of the Right called “cultural Marxism” (the idea predates Breitbart: see Jay 2011; Huyssen 2017; Berkowitz 2003. Walsh’s 2017 The Devil’s Pleasure Palace, which LogoDaedalus mocked to Cernovich, is one of the touchstones of this theory). Breitbart’s own account states that there is a relatively straight line from Hegel’s celebration of the state to Marx’s communism to Woodrow Wilson’s and Franklin Delano Roosevelt’s communitarianism—and on to the critical theory of Theodor W. Adorno and Herbert Marcuse (this is the actual “cultural Marxism,” one supposes), Saul Alinsky’s community organizing, and (surprise!) Barack Obama’s as well (Breitbart 2011, 105-37). The phrase “cultural Marxism” is a play on the Nazi phrase “cultural Bolshevism,” a conspiracy theory that targeted Jews as alleged spies and collaborators of Stalin’s Russia. The anti-Semitism is only slightly more concealed in the updated version. The idea is that Adorno and Marcuse took control of the cultural matrix of the United States and made the country “culturally communist.” In this theory, individual freedom is always second to an oppressive community in the contemporary US. Between Breitbart’s adoption of critical theory and NRx (see Haider 2017; Beckett 2017; Noys 2014)—not to mention the global expansion of this family of theories by figures like Carvalho—it’s clear that the “Alt Right” is a theory-deep assemblage. The theory is never just analysis, though. It’s always a question of intervention, or media manipulation (see Marwick and Lewis 2017).

    Breitbart himself liked to capture this blend in his slogan “politics is downstream from culture.” Breitbart’s news organization implicitly cedes the theoretical point to Adorno and Marcuse, trying to build cultural hegemony in the online era. Reform the cultural, dominate the politics—all on the basis of narrative and media manipulation. For the Alt Right, politics isn’t “online” or “not,” but will always be both.

    In mid-August of 2017, a flap in the National Security Council was caused by a memo, probably penned by staffer Rich Higgins (who reportedly has ties to Cernovich), that appeared to accuse then National Security Adviser, H. R. McMaster, of supporting or at least tolerating Cultural Marxism’s attempt to undermine Trump through narrative (see Winter and Groll 2017). Higgins and other staffers associated with the memo were fired, a fact which Trump learned from Sean Hannity and which made him “furious.” The memo, about which the president “gushed,” defines “the successful outcome of cultural Marxism [as] a bureaucratic state beholden to no one, certainly not the American people. With no rule of law considerations outside those that further deep state power, the deep state truly becomes, as Hegel advocated, god bestriding the earth” (Higgins 2017). Hegel defined the state as the goal of all social activity, the highest form of human institution or “objective spirit.” Years later, it is still Trump vs. the state, in its belated thrall to Adorno, Marcuse, and (somehow) Hegel. Politics is downstream from German Idealism.

    Kantbot’s aspiration was to expand and deepen the theory of this kind of critical manipulation of the media—but he wants to rehabilitate Hegel. In Kantbot’s work we begin to glimpse how irony plays a role in this manipulation. Irony is play with the very possibility of signification in the first place. Inflected through digital media—code and platform—it becomes not just play but its own expression of the interface between culture and politics, overlapping with one of the driving questions of the German cultural renaissance around 1800. Kantbot, in other words, diagnosed and (at least at one time) aspired to practice a particularly sophisticated combination of rhetorical and media theory as political speech in social media.

    Consider this tweet:



    After an innocuous webcomic frog became infamous in 2016, when the Clinton campaign denounced its use and the Anti-Defamation League took the extraordinary step of adding the meme to its database of hate symbols, Pepe the Frog gained a kind of cult status. Kantbot’s reading of the phenomenon is that the “point is demonstration of power to control meaning of sign in modern media environment.” If this sounds like French Theory, then one “Johannes Schmitt” (whose profile thumbnail appears to be an SS officer) agrees. “Starting to sound like Derrida,” he wrote. To which Kantbot responds, momentously: “*schiller.”



    The asterisk-correction contains multitudes. Kantbot is only too happy to jettison the “theory,” but insists that the manipulation of the sign in its relation to the media environment maintains and alters the balance between culture and politics. Friedrich Schiller, whose classical aesthetic theory claims just this, is a recurrent figure for Kantbot. The idea, it appears, is to create a culture that is beyond politics and from which politics can be downstream. To that end, Kantbot opened his own online venue, the “Autistic Mercury,” named after Der teutsche Merkur, one of the German Enlightenment’s central organs.[iv] For Schiller, there was a “play drive” that mediated between “form” and “content” drives. It preserved the autonomy of art and culture and had the potential to transform the political space, but only indirectly. Kantbot wants to imitate the composite culture of the era of Kant, Schiller, and Hegel—just as they built their classicism on Johann Winckelmann’s famous doctrine that an autonomous and inimitable culture must be built on imitation of the Greeks. Schiller was suggesting that art could prevent another post-revolutionary Terror like the one that had engulfed France. Kantbot is suggesting that the metaphysics of communication—signs as both rhetoric and mediation—could resurrect a cultural vitality that got lost somewhere along the path from Marx to the present. Donald Trump is the instrument of that transformation, but its full expression requires more than DC politics. It requires (online) culture of the kind the campaign unleashed but that the presidency has done little more than maintain. (Kantbot uses Schiller for his media analysis too, as we will see.) Spencer and Kantbot agreed during their “debate” that perhaps Trump had done enough before he was president to justify the disappointing outcomes of his actual presidency. Conservative policy-making earns little more than scorn from this crowd, if it is detached from the putative real work of building the Alt Right avant-garde.



    According to one commenter on YouTube, Kantbot is “the troll philosopher of the kek era.” Kek is the god of the trolls. His name is based on a transposition of the letters LOL in the massively multiplayer online role-playing game World of Warcraft. “KEK” is what the enemy sees when you laugh out loud to someone on your team, in an intuitively crackable code that was made into an idol to worship. Kek—a half-fake demi-god—illustrates the balance between irony and ontology in the rhetorical media practice known as trolling.


    The name of the idol, it turned out, was also the name of an actual ancient Egyptian demi-god (KEK), a phenomenon that confirmed his divine status, in an example of so-called “meme magic.” Meme magic is when—often by praying to KEK or relying on a numerological system based on the random numbers assigned to users of 4Chan and other message boards—something that exists only online manifests IRL, “in real life” (Burton 2016). Examples include Hillary Clinton’s illness in the late stages of the campaign (widely and falsely rumored—e.g. by Cernovich—before a real yet minor illness was confirmed), and of course Donald Trump’s actual election. Meme magic is everywhere: it names the channel between online and offline.

    Meme magic is both drenched in irony and deeply ontological. What is meant is just “for the lulz,” while what is said is magic. This is irony of the rhetorical kind—right up until it works. The case in point is the election, where the result, and whether the trolls helped, hovers between reality and magic. First there is meme generation, usually playfully ironic. Something happens that resembles the meme. Then the irony is retroactively assigned a magical function. But statements about meme magic are themselves ironic. They use the contradiction between reality and rhetoric (between Clinton’s predicted illness and her actual pneumonia) as the generator of a second-order irony (the claim that Trump’s election was caused by memes is itself a meme). It’s tempting to see this just as a juvenile game, but we shouldn’t dismiss the way the irony scales between the different levels of content-production and interpretation. Irony is rhetorical and ontological at once. We shouldn’t believe in meme magic, but we should take this recursive ironizing function very seriously indeed. It is this kind of irony that Kantbot diagnoses in Trump’s manipulation of the media.

    ii. Coding Irony: Friedrich Schlegel, Claude Shannon, and Twitter

    The ongoing inability of the international press to cover Donald Trump in a way that measures the impact of his statements rather than their content stems from this use of irony. We’ve gotten used to fake news and hyperbolic tweets—so used to these that we’re missing the irony that’s built in. Every time Trump denies something about collusion or says something about the coal industry that’s patently false, he’s exploiting the difference between two sets of truth-valuations that conflict with one another (e.g. racism and pacifism). That splits his audience—something that the splitting of the message in irony allows—and works both to fight his “enemies” and to build solidarity in his base. Trump has changed the media’s overall expression, making not his statements but the very relation between content and platform ironic. This objective form of media irony is not to be confused with “wit.” Donald Trump is not “witty.” He is, however, a master of irony as a tool for manipulation built into the way digital media allow signification to occur. He is the master of an expanded sense of irony that runs throughout the history of its theory.

    When White Nationalists descended on Charlottesville, Virginia, on August 11, 2017, leading to the death of one counter-protester the next day, Trump dragged his feet in naming “racism.” He did, eventually, condemn the groups by name—prefacing his statements with a short consideration of the economy, a dog-whistle about what comes first (actually racism, for which “the economy” has become a cipher). In the interim, however, his condemnations of violence “as such” led Spencer to tweet this:

    Of course, two days later, Trump would explicitly blame the “Alt Left” for violence it did not commit. Before that, however, Spencer’s irony here relied on Trump’s previous—malicious—irony. By condemning “all” violence when only one kind of violence was at issue, Trump was attempting to split the signal of his speech. The idea was to let the racists know that they could continue, through a condemnation of their actions that paid lip service to the non-violent ideals of the liberal media. Spencer gleefully used the internal contradiction of Trump’s speech, calling attention to the side of the message that was supposed to be “hidden.” Even the apparently non-ironic condemnation of “both sides” exploited a contradiction not in the statement itself, but in the way it is interpreted by different outlets and political communities. Trump’s invocation of the “Alt Left” confirmed the suspicions of those on the Right, panicked the Center, and all but forced the Left to adopt the term. The filter bubbles, meanwhile, allowed this single message to deliver contradictory meanings on different news sites—one reason headlines across the political spectrum are often identical as statements, but opposite in patent intent. Making the dog whistle audible, however, doesn’t spell the “end of the ironic Nazi,” as Brian Feldman commented (Feldman 2017). It just means that the irony isn’t opposed to but instead part of the politics. Today this form of irony is enabled and constituted by digital media, and it’s not going away. It forms an irreducible part of the new political situation, one that we ignore or deny at our own peril.

    Irony isn’t just intentional wit, in other words—as Quintilian already knew. One reason we nevertheless tend to confuse wit and irony is that the expansion of irony beyond the realm of rhetoric—usually dated to Romanticism, which also falls into Kantbot’s period of obsession—made irony into a category of psychology and style. Most treatments of irony take this as an assumption: modern life is drenched in the stuff, so it isn’t “just” a trope (Behler 1990). But it is a feeling, one that you get from Weird Twitter but also from the constant stream of announcements on Facebook about leaving Facebook. Quintilian already points the way beyond this gestural understanding. The problem is the source of the contradiction. It is not obvious what allows for contradiction, where it can occur, what conditions satisfy it, and thus what forms the basis for irony. If the source is dynamic, unstable, then the concept of irony, as Paul de Man pointed out long ago, is not really a concept at all (de Man 1996).

    The theoretician of irony who most squarely accounts for its embeddedness in material and media conditions is Friedrich Schlegel. In nearly all cases, Schlegel writes, irony serves to reinforce or sharpen some message by means of the reflexivity of language: by contradicting the point, it calls it that much more vividly to mind. (Remember when Trump said, in the 2016 debates, that he refused to invoke Bill Clinton’s sexual history for Chelsea’s sake?) But there is another, more curious type:

    The first and most distinguished [kind of irony] of all is coarse irony; to be found most often in the actual nature of things and which is one of its most generally distributed substances [in der wirklichen Natur der Dinge und ist einer ihrer allgemein verbreitetsten Stoffe]; it is most at home in the history of humanity (Schlegel 1958-, 368).





    In other words, irony is not merely the drawing of attention to formal or material conditions of the situation of communication, but also a widely distributed “substance” or capacity in material. Twitter irony finds this substance in the platform and its underlying code, as we will see. If irony is both material and rhetorical, this means that its use is an activation of a potential in the interface between meaning and matter. This could allow, in principle, an intervention into the conditions of signification. In this sense, irony is the rhetorical term for what we could call coding, the tailoring of language to channels in technologies of transmission. Twitter reproduces an irony that is built into any attempt to code language, as we are about to see. And it’s the overlap of code, irony, and politics that Kantbot marshals Hegel to address.

    Coded irony—irony that is both rhetorical and digitally enabled—exploded onto the political scene in 2016 through Twitter. Twitter was the medium through which the political element of the messageboards broke through (not least because of Trump’s nearly 60 million followers, even if nearly half of them are bots). It is far from the only politicized social medium, as a growing literature is describing (Phillips and Milner 2017; Phillips 2016; Milner 2016; Goerzen 2017). But it has been a primary site of the intimacy of media and politics over the course of 2016 and 2017, and I think that has something to do with Twitter itself, and with the relationship between encoded communications and irony.

    Take this retweet, which captures a great deal about Twitter:

    “Kim Kierkegaardashian,” or @KimKierkegaard, joined Twitter in June 2012 and has about 259,000 followers at the time of writing. The account mashes up Kardashian’s tweet style, oriented toward selling herself and her brand, with the proto-existentialism of Søren Kierkegaard. Take, for example, an early tweet from 8 July, 2012: “I have majorly fallen off my workout-eating plan! AND it’s summer! But to despair over sin is to sink deeper into it.” The account sticks close to Kardashian’s actual tweets and Kierkegaard’s actual words. In the tweet above, from April 2017, @KimKierkegaard has retweeted Kardashian herself incidentally formulating one of Kierkegaard’s central ideas in the proprietary language of social media. “Omg” as shorthand takes the already nearly entirely secular phrase “oh my god” and collapses any trace of transcendence. The retweet therefore returns us to the opposite extreme, in which anxiety points us to the finitude of human existence in Kierkegaard. If we know how to read this, it is a performance of that other Kierkegaardian bellwether, irony.

    If you were to encounter Kardashian’s tweet without the retweet, there would be no irony at all. In the retweet, the tweet is presented as an object and resignified as its opposite. Note that this is a two-way street: until November 2009, there were no retweets. Before then, one had to type “RT” and then paste the original tweet in. Twitter responded, piloting a button that allows the re-presentation of a tweet (Stone 2009). This has vastly contributed to the sense of irony, since the speaker is also split between two sources, such that many accounts have some version of “RTs not endorsements” in their description. Perhaps political scandal is so often attached to RTs because the source as well as the content can be construed in multiple different and often contradictory ways. Schlegel would have noted that this is a case where irony swallows the speaker’s authority over it. That situation was forced into the code by the speech, not the other way around.

    I’d like to call the retweet a resignificatory device, as distinct from an amplificatory one. Amplificatory signaling cannibalizes a bit of redundancy in the algorithm: the more times your video has been seen on YouTube, the more likely it is to be recommended (although the story is more complicated than that). Retweets certainly amplify the original message, but they also reproduce it under another name. They have the ability to resignify—as the “repost” function on Facebook also does, to some extent.[v] Resignificatory signaling takes the unequivocal messages at the heart of the very notion of “code” and makes them rhetorical, while retaining their visual identity. Of course, no message is without an effect on its receiver—a point that information theory made long ago. But the apparent physical identity of the tweet and the retweet forces the rhetorical aspect of the message to the fore. In doing so, it draws explicit attention to the deep irony embedded in encoded messages of any kind.

    Twitter was originally written in Ruby on Rails, a model-view-controller (MVC) web framework built on the object-oriented language Ruby, and the code matters. Object-oriented languages allow any term to be treated either as an object or as an expression, making Shannon’s observations on language operational.[vi] The retweet is an embedding of this ability to switch any term between these two basic functions. We can do this in language, of course (that’s why object-oriented languages are useful). But when the retweet is presented not as copy-pasted but as a visual reproduction of the original tweet, the expressive nature of the original tweet is made an object, imitating the capacity of the coding language. In other words, Twitter has come to incorporate the object-oriented logic of its programming language in its capacity to signify. At the level of speech, anything can be an object on Twitter—on your phone, you literally touch it and it presents itself. Most things can be resignified through one more touch, and if not they can be screencapped and retweeted (for example, the number of followers one has, a since-deleted tweet, etc.). Once something has come to signify in the medium, it can be infinitely resignified.
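
    To make the expression/object switch concrete, here is a minimal, hypothetical sketch (in Python rather than Ruby, and in no way Twitter's actual data model): a tweet is an expression attributed to a speaker, and a retweet holds that entire expression as an object inside a new expression, which is the gap that resignification exploits.

```python
# Hypothetical sketch of the expression/object switch discussed above.
# Not Twitter's actual data model; Python stands in for Ruby here.

from dataclasses import dataclass


@dataclass
class Tweet:
    author: str
    text: str

    def render(self) -> str:
        # The tweet as expression: speech attributed to its author.
        return f"@{self.author}: {self.text}"


@dataclass
class Retweet:
    retweeter: str
    original: Tweet  # the original expression, now held as an object

    def render(self) -> str:
        # The same words reappear verbatim, but under a second source;
        # the split between sources is what resignification exploits.
        return f"@{self.retweeter} RT {self.original.render()}"


if __name__ == "__main__":
    original = Tweet("KimKardashian", "this anxiety omg")
    resignified = Retweet("KimKierkegaard", original)
    print(original.render())
    print(resignified.render())
```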

    When, as in a retweet, an expression is made into an object of another expression, its meaning is altered. This is because its source is altered. A statement of any kind requires the notion that someone has made that statement. This means that a retweet, by making an expression into an object, exemplifies the contradiction between subject and object—the very contradiction on which Kant had based his revolutionary philosophy. Twitter is fitted, and has been throughout its existence retrofitted, to generalize this speech situation. It is the platform of the subject-object dialectic, as Hegel might have put it. By presenting subject and object in a single statement—the retweet as expression and object all at once—Twitter embodies what rhetorical theory has called irony since the ancients. It is irony as code. This irony resignifies and amplifies the rhetorical irony of the dog whistle, the troll, the President.

    Coding is an encounter between two sets of material conditions: the structure of a language, and the capacity of a channel. This was captured in truly general form for the first time in Claude Shannon’s famous 1948 paper, “A Mathematical Theory of Communication,” whose schematic diagram of a general communication system runs from an information source through a transmitter, a noisy channel, and a receiver, to a destination.

    Shannon’s achievement was a general formula for the relation between the structure of the source and the noise in the channel.[vii] If the set of symbols can be fitted to signals complex or articulated enough to arrive through the noise, then nearly frictionless communication can be engineered. The source—his preferred example was written English—had a structure that limited its “entropy.” If you’re looking at one letter in English, for example, and you have to guess what the next one will be, you theoretically have 27 choices (the 26 letters plus a space). But the likelihood, if the letter you’re looking at is, for example, “q,” that the next letter will be “u” is very high. The likelihood for “x” is extremely low. The higher likelihood is called “redundancy,” a limitation on the absolute measure of chaos, or entropy, that the number of elements imposes. No source for communication can be entirely random, because without patterns of one kind or another we can’t recognize what’s being communicated.[viii]
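
    A toy calculation can make the relation between entropy and redundancy concrete. The sketch below is my own illustration, not Shannon's procedure: it estimates per-symbol entropy from simple letter frequencies and compares it to the maximum entropy the alphabet would allow, whereas Shannon's figure for English also folds in longer-range statistical structure.

```python
# Toy illustration of Shannon's entropy and redundancy, using first-order
# letter frequencies only. My own example; Shannon's ~50% figure for English
# also accounts for longer-range structure (digrams, words, syntax).

import math
from collections import Counter


def entropy_per_symbol(text: str) -> float:
    """Estimate per-symbol entropy (bits) from observed symbol frequencies."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def redundancy(text: str) -> float:
    """Redundancy = 1 - H / H_max, with H_max = log2(alphabet size)."""
    h_max = math.log2(len(set(text)))
    return 1 - entropy_per_symbol(text) / h_max


sample = "the quick brown fox jumps over the lazy dog and the quick brown fox"
print(f"entropy:    {entropy_per_symbol(sample):.2f} bits per symbol")
print(f"redundancy: {redundancy(sample):.0%}")
```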

    We tend to confuse entropy and the noise in the channel, and it is crucial to see that they are not the same thing. The channel is noisy, while the source is entropic. There is, of course, entropy in the channel—everything is subject to the second law of thermodynamics, without exception. But “entropy” is not in any way comparable to noise in Shannon, because “entropy” is a way of describing the conditional restraints on any structured source for communication, like the English language, the set of ideas in the brain, or what have you. Entropy is a way to describe the opposite of redundancy in the source; it expresses probability rather than the slow disintegration, the “heat death,” with which it is usually associated.[ix] If redundancy = 1, we have a kind of absolute rule or pure pattern. Redundancy works syntactically, too: “then” or “there” after the phrase “see you” is a high-level redundancy that is coded into SMS services.

    This is what Shannon calls a “conditional restraint” on the theoretical absolute entropy (based on the total number of parts), or freedom in choosing a message. It is also the basis for autocorrect technologies, which obviously have semantic effects, as the genre of autocorrect bloopers demonstrates.
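
    The same conditional restraint is easy to sketch as a predictive-text toy. The example below is hypothetical and my own (no actual SMS autocomplete works this crudely): it counts which words follow which in a tiny corpus and suggests the most likely continuations, so that "then" and "there" surface after "you" purely because of redundancy in the source.

```python
# Hypothetical predictive-text toy, not any real autocomplete implementation.
# It encodes the "conditional restraint" of the source: given the last word,
# the most probable continuations are suggested first.

from collections import Counter, defaultdict

corpus = (
    "see you then . see you there . see you then . "
    "see you soon . talk to you then ."
)

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1


def suggest(prev_word, k=2):
    """Return the k most frequent continuations of prev_word in the corpus."""
    return [w for w, _ in followers[prev_word].most_common(k)]


print(suggest("you"))  # e.g. ['then', 'there'] -- the redundancy does the predicting
```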

    A large portion of Shannon’s paper is taken up with calculating the redundancy of written English, which he determines to be nearly 50%, meaning that half the letters can be removed from most sentences or distorted without disturbing our ability to understand them.[x]
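
    A quick, informal way to feel the force of that figure (again my own illustration, not the statistical procedure Shannon actually used) is to strip the vowels from a sentence and notice that it usually remains legible:

```python
# Informal illustration of redundancy in written English: remove the vowels
# and the sentence usually stays readable. My own demonstration, not
# Shannon's method of estimating redundancy.

def strip_vowels(sentence: str) -> str:
    return "".join(ch for ch in sentence if ch.lower() not in "aeiou")


print(strip_vowels("half the letters can be removed without losing the message"))
# -> hlf th lttrs cn b rmvd wtht lsng th mssg
```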

    The general process of coding, by Shannon’s lights, is a manipulation of the relationship between the structure of the source and the capacity of the channel as a dynamic interaction between two sets of evolving rules. Shannon’s statement that the “semantic aspects” of messages were “irrelevant to the engineering problem” has often been taken to mean he played fast and loose with the concept of language (see Hayles 1999; but see also Liu 2010; and for the complex history of Shannon’s reception Floridi 2010). But rarely does anyone ask what exactly Shannon did mean, or at least conceptually sketched out, in his approach to language. It’s worth pointing to the crucial role that source-structure redundancy plays in his theory, since it cuts close to Schlegel’s notion of material irony.

    Neither the source nor the channel is static. The scene of coding is open to restructuring at both ends. English is evolving; even its statistical structure changes over time. The channels, and the codes used to fit the source to them, are evolving too. There is no guarantee that integrated circuits will remain the hardware of the future. They did not yet exist when Shannon published his theory.

    This point can be hard to see in today’s world, where we encounter opaque packets of already-established code at every turn. It was easier to see for Shannon and those who followed him, since nothing was standardized, let alone commercialized, in 1948. But no amount of stack accretion can change the fact that mediated communication rests on the dynamic relation between relative entropy in the source and the way the channel is built.

    Redundancy points to this dynamic by its very nature. If there is absolute redundancy, nothing is communicated, because we already know the message with 100% certainty. With no redundancy, no message arrives at all. In between these two extremes, messages are internally objectified or doubled, but differ slightly from one another, in order to be communicable. In other words, every interpretable signal is a retweet. Redundancy, which stabilizes communicability by providing pattern, also ensures that the rules are dynamic. There is no fully redundant message. Every message is between 0 and 1, and this is what allows it to function as expression or object. Twitter imitates the rules of source structure, showing that communication is the locale where formal and material constraints encounter one another. It illustrates this principle of communication by programming it into the platform as a foundational principle. Twitter exemplifies the dynamic situation of coding as Shannon defined it. Signification is resignification.

    If rhetoric is embedded this deeply into the very notion of code, then it must possess the capacity to change the situation of communication, as Schlegel suggested. But it cannot do this by fiat or by meme magic. The retweeted “this anxiety omg” hardly stands to change the statistical structure of English much. It can, however, point to the dynamic material condition of mediated signification in general, something Warren Weaver, who wrote a popularizing introduction to Shannon’s work, acknowledged:

    anyone would agree that the probability is low for such a sequence of words as “Constantinople fishing nasty pink.” Incidentally, it is low, but not zero; for it is perfectly possible to think of a passage in which one sentence closes with “Constantinople fishing,” and the next begins with “Nasty pink.” And we might observe in passing that the unlikely four-word sequence under discussion has occurred in a single good English sentence, namely the one above. (Shannon and Weaver 1964, 11)

    There is no further reflection in Weaver’s essay on this passage, but then, that is the nature of irony. By including the phrase “Constantinople fishing nasty pink” in the English language, Weaver has shifted its entropic structure, however slightly. This shift is marginal to our ability to communicate (I am amplifying it very slightly right now, as all speech acts do), but some shifts are larger-scale, like the introduction of a word or concept, or the rise of a system of notions that orient individuals and communities (ideology). These shifts always have the characteristic that Weaver points to here, which is that they double as expressions and objects. This doubling is a kind of generalized redundancy—or capacity for irony—built into semiotic systems, material irony flashing up into the rhetorical irony it enables. That is a Romantic notion enshrined in a founding document of the digital age.
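
    Weaver's point can be restaged as a toy computation (my own construction, with a made-up miniature corpus): before the quotation, the word pair "Constantinople fishing" never occurs in the corpus; after it, its count is nonzero, and the source's statistical structure has shifted, however slightly.

```python
# Toy demonstration of Weaver's point, using a made-up miniature corpus.
# Quoting an "impossible" phrase gives its word sequence a nonzero count,
# nudging the statistical structure of the source.

def bigram_count(sentences, pair):
    """Count occurrences of the word pair across all sentences."""
    total = 0
    for s in sentences:
        words = s.lower().split()
        total += sum(1 for a, b in zip(words, words[1:]) if (a, b) == pair)
    return total


corpus = ["the cat sat on the mat", "the dog sat on the rug"]
pair = ("constantinople", "fishing")

print(bigram_count(corpus, pair))  # 0: the sequence never occurs

corpus.append("one sentence closes with Constantinople fishing nasty pink")
print(bigram_count(corpus, pair))  # 1: quoting the phrase changed the statistics
```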

    Now we can see one reason that retweeting is often the source of scandal. A retweet or repetition of content ramifies the original redundancy of the message and fragments the message’s effect. This is not to say it undermines that effect. Instead, it uses the redundancy in the source and the noise in the channel to split the message according to any one of the factors that Quintilian announced: speaker, audience, context. In the retweet, this effect is distributed across more than one of these areas, producing more than one contrary item, or internally multiple irony. Take Trump’s summer 2016 tweet of this anti-Semitic attack on Clinton—not a proper retweet, but a resignification of the same sort:



    The scandal that ensued mostly involved the source of the original content (white supremacists), and Trump skated through the incident by claiming that it wasn’t anti-Semitic anyway, that it was a sheriff’s star, and that he had only “retweeted” the content. In disavowing the content in separate and seemingly contradictory ways,[xi] he signaled to his base that he was still committed to its content, while maintaining at the level of the statement that he wasn’t. The effect was repeated again and again, and is a fundamental part of our government now. Trump’s positions are neither new nor interesting. What’s new is the way he amplifies his rhetorical maneuvers in social media. It is the exploitation of irony—not wit, not snark, not sarcasm—at the level of redundancy to maintain a signal that is internally split in multiple ways. This is not bad faith or stupidity; it’s an invasion of politics by irony. It’s also a kind of end to the neoliberal speech regime.

    iii. Irony and Politics after 2016, or Uncommunicative Capitalism

    The channel between speech and politics is open—again. That channel is saturated in irony, of a kind we are not used to thinking about. In 2003, following what were widely billed as the largest demonstrations in the history of the world, with tens of millions gathering in the streets globally to resist the George W. Bush administration’s stated intent to go to war, the United States did just that, invading Iraq on 20 March of that year. The consequences of that war have yet to be fully assessed. But while it is clear that we are living in its long foreign policy shadow, the seemingly momentous events of 2016 echo 2003 in a different way. 2016 was the year that blew open the neoliberal pax between the media, speech, and politics.

    No amount of noise could prevent the invasion of Iraq. As Jodi Dean has shown, “communicative capitalism” ensured that the circulation of signs was autotelic, proliferating language and ideology sealed off from the politics of events like war or even domestic policy. She writes that:

    In communicative capitalism, however, the use value of a message is less important than its exchange value, its contribution to a larger pool, flow or circulation of content. A contribution need not be understood; it need only be repeated, reproduced, forwarded. Circulation is the context, the condition for the acceptance or rejection of a contribution… Some contributions make a difference. But more significant is the system, the communicative network. (Dean 2005, 56)

    This situation no longer entirely holds. Dean’s brilliant analysis—along with those of many others who diagnosed the situation of media and politics in neoliberalism (e.g. Fisher 2009; Liu 2004)—forms the basis for understanding what we are living through and in now, even as the situation has changed. The notion that the invasion of Iraq could have been stopped by the protests recalls the optimism about speech’s effect on national politics of the New Left in the 1960s and after (begging the important question of whether the parallel protests against the Vietnam War played a causal role in its end). That model of speech is no longer entirely in force. Dean’s notion of a kind of metastatic media with few if any contributions that “make a difference” politically has yielded to a concerted effort to break through that isolation, to manipulate the circulatory media to make a difference. We live with communicative capitalism, but added to it is the possibility of complex rhetorical manipulation, a political possibility that resides in the irony of the very channels that made capitalism communicative in the first place.

    We know that authoritarianism engages in a kind of double-speak, talks out of “both sides of its mouth,” uses the dog whistle. It might be unusual to think of this set of techniques as irony—but I think we have to. Trump doesn’t just dog-whistle, he sends cleanly separate messages to differing effect through the same statement, as he did after Charlottesville. This technique keeps the media he is so hostile to on the hook, since their click rates depend on covering whatever extreme statement he has made that day. The constant and confused coverage this leads to is then a separate signal sent through the same line—by means of the contradiction between humility and vanity, and between content and effect—to his own followers. In other words, he doesn’t use Twitter only to amplify his message, but to resignify it internally. Resignificatory media allow irony to create a vector of efficacy through political discourse. That is not exactly “communicative capitalism,” but something more like the field-manipulations recently described by Johanna Drucker: affective, indirect, non-linear (Drucker 2018). Irony happens to be the tool that is not instrumental, a non-linear weapon, a kind of material-rhetorical wave one can ride but not control. As Quinn Slobodian has been arguing, we have in no way left the neoliberal era in economics. But perhaps we have left its speech regime behind. If so, that is a matter of strategic urgency for the Left.

    iv. Hegelian Media Theory

    The new Right is years ahead on this score, in practice but also in analysis. In one of the first pieces in what has become a truly staggering wave of coverage of the NRx movement, Rosie Gray interviewed Kantbot extensively (Gray 2017). Gray’s main target was the troll Mencius Moldbug (Curtis Yarvin) whose political philosophy blends the Enlightenment absolutism of Frederick the Great with a kind of avant-garde corporatism in which the state is run not on the model of a corporation but as a corporation. On the Alt Right, the German Enlightenment is unavoidable.

    In his prose, Kantbot can be quite serious, even theoretical. He responded to Gray’s article in a Medium post with a long quotation from Schiller’s 1784 “The Theater as Moral Institution” as its epigraph (Kantbot 2017b). For Schiller, one had to imitate the literary classics to become inimitable. And he thought the best means of transmission would be the theater, with its live audience and electric atmosphere. The Enlightenment theater, as Kantbot writes, “was not only a source of entertainment, but also one of radical political education.”

    Schiller argued that the stage educated more deeply than secular law or morality, that its horizon extended farther into the true vocation of the human. Culture educates where the law cannot. Schiller, it turns out, also thought that politics is downstream from culture. Kantbot finds, in other words, a source in Enlightenment literary theory for Breitbart’s signature claim. That means that narrative is crucial to political control. But Kantbot extends the point from narrative to the medium in which narrative is told.

    Schiller gives us reason to think that the arrangement of the medium—its physical layout, the possibilities but also the limits of its mechanisms of transmission—is also crucial to cultural politics (this is why it makes sense to him to replace a follower’s reference to Derrida with “*schiller”). He writes that “The theater is the common channel through which the light of wisdom streams down from the thoughtful, better part of society, spreading thence in mild beams throughout the entire state.” Story needs to be embedded in a politically effective channel, and politically-minded content-producers should pay attention to the way that channel works, what it can do that another means of communication—say, the novel—can’t.

    Kantbot argues that social media is the new Enlightenment Stage. When Schiller writes that the stage is the “common channel” for light and wisdom, he’s using what would later become Shannon’s term—in German, der Kanal. Schiller thought the channel of the stage was suited to tempering barbarisms (both unenlightened “savagery” and post-enlightened Terrors like Robespierre’s). For him, story in the proper medium could carry information and shape habits and tendencies, influencing politics indirectly, eventually creating an “aesthetic state.” That is the role that social media have today, according to Kantbot. In other words, the constraints of a putatively biological gender or race are secondary to their articulation through the utterly complex web of irony-saturated social media. Those media allow the categories in the first place, but are so complex as to impose their own constraint on freedom. For those on the Alt Right, accepting and overcoming that constraint is the task of the individual—even if it is often assigned mostly to non-white or non-male individuals, while white males achieve freedom through complaint. Consistency aside, however, the notion that media form their own constraint on freedom, and the tool for accepting and overcoming that constraint is irony, runs deep.

    Kantbot goes on to use Schiller to critique Gray’s actual article about NRx: “Though the Altright [sic] is viewed primarily as a political movement, a concrete ideology organizing an array of extreme political positions on the issues of our time, I believe that understanding it is a cultural phenomena [sic], rather than a purely political one, can be an equally valuable way of conceptualizing it. It is here that the journos stumble, as this goes directly to what newspapers and magazines have struggled to grasp in the 21st century: the role of social media in the future of mass communication.” It is Trump’s retrofitting of social media—and now the mass media as well—to his own ends that demonstrates, and therefore completes, the system of German Idealism. Content production on social media is political because it is the locus of the interface between irony and ontology, where meme magic also resides. This allows the Alt Right to sync what we have long taken to be a liberal form of speech (irony) with extremist political commitments that seem to conflict with the very rhetorical gesture. Misogyny and racism have re-entered the public sphere. They’ve done so not in spite of but with the explicit help of ironic manipulations of media.

    The trolls sync this transformation of the media with misogynist ontology. Both are construed as constraints in the forward march of Trump, Kek, and culture in general. One disturbing version of the essentialist suggestion for understanding how Trump will complete the system of German Idealism comes from one “Jef Costello” (a troll named for Alain Delon’s character in Jean-Pierre Melville’s 1967 film Le Samouraï):

    Ironically, Hegel himself gave us the formula for understanding exactly what must occur in the next stage of history. In his Philosophy of Right, Hegel spoke of freedom as “willing our determination.” That means affirming the social conditions that make the array of options we have to choose from in life possible. We don’t choose that array, indeed we are determined by those social conditions. But within those conditions we are free to choose among certain options. Really, it can’t be any other way. Hegel, however, only spoke of willing our determination by social conditions. Let us enlarge this to include biological conditions, and other sorts of factors. As Collin Cleary has written: Thus, for example, the cure for the West’s radical feminism is for the feminist to recognize that the biological conditions that make her a woman—with a woman’s mind, emotions, and drives—cannot be denied and are not an oppressive “other.” They are the parameters within which she can realize who she is and seek satisfaction in life. No one can be free of some set of parameters or other; life is about realizing ourselves and our potentials within those parameters.

    As Hegel correctly saw, we are the only beings in the universe who seek self-awareness, and our history is the history of our self-realization through increased self-understanding. The next phase of history will be one in which we reject liberalism’s chimerical notion of freedom as infinite, unlimited self-determination, and seek self-realization through embracing our finitude. Like it or not, this next phase in human history is now being shepherded by Donald Trump—as unlikely a World-Historical Individual as there ever was. But there you have it. Yes! Donald Trump will complete the system of German Idealism. (Costello 2017)

    Note the regular features of this interpretation: it is a nature-forward argument about social categories, universalist in application, misogynist in structure, and ultra-intellectual. Constraint is shifted not only from the social into the natural, but also back into the social again. The poststructuralist phrase “embracing our finitude” (put into the emphatic italics of Theory) underscores the reversal from semiotics to ontology by way of German Idealism. Trump, it seems, will help us realize our natural places in an old-world order even while pushing the vanguard trolls forward into the utopian future. In contrast to Kantbot’s own content, this reading lacks irony. That is not to say that the anti-Gender Studies and generally viciously misogynist agenda of the Alt Right is not being amplified throughout the globe, as we increasingly hear. But this dry analysis lacks the manipulative capacity that understanding social media in German Idealist terms brings with it. It does not resignify.

    Costello’s understanding is crude compared with that of Kantbot himself. The constraints, for Kantbot, are not primarily those of a naturalized gender, but instead the semiotic or rhetorical structure of the media through which any naturalization flows. The media are not likely, in this vision, to end any gender regimes—but recognizing that such regimes are contingent on representation and the manipulation of signs has never been the sole property of the Left. That manipulation implies a constrained, rather than an absolute, understanding of freedom. This constraint is an important theoretical element of the Alt Right, and in some sense they are correct to call on Hegel for it. Their thinking wavers—again, ironically—between essentialism about things like gender and race, and an understanding of constraint as primarily constituted by the media.

    Kantbot mixes his andrism and his media critique seamlessly. The trolls have some of their deepest roots in internet misogyny, including so-called Men’s Rights Activism and the hashtag #redpill. The red pill that Neo takes in The Matrix to exit the collective illusion is here compared to “waking up” from the “culturally Marxist” feminism that inflects the putative communism that pervades contemporary US culture. Here is Kantbot’s version:

    The tweet elides any difference between corporate diversity culture and the Left feminism that would also critique it, but that is precisely the point. Irony does not undermine (it rather bolsters) serious misogyny. When Angela Nagle’s book, Kill All Normies: Online Culture Wars from 4Chan and Tumblr to Trump and the Alt-Right, touched off a seemingly endless Left-on-Left hot-take war, Kantbot responded with his own review of the book (since taken down). This review contains a plea for a “nuanced” understanding of Elliot Rodger, who killed six people in Southern California in 2014 as “retribution” for women rejecting him sexually.[xii] We can’t allow (justified) disgust at this kind of content to blind us to the ongoing irony—not jokes, not wit, not snark—that enables this vile ideology. In many ways, the irony that persists in the heart of this darkness allows Kantbot and his ilk to take the Left more seriously than the Left takes the Right. Gender is a crucial, but hardly the only, arena in which the Alt Right’s combination of essentialist ontology and media irony is fighting the intellectual Left.

    In the sub-subculture known as Men Going Their Own Way, or MGTOW, the term “volcel” came to prominence in recent years. “Volcel” means “voluntarily celibate,” or entirely ridding one’s existence of the need for or reliance on women. The trolls responded to this term with the notion of an “incel,” someone “involuntarily celibate,” in a characteristically self-deprecating move. Again, this is irony: none of the trolls actually want to be celibate, but they claim a kind of joy in signs by recoding the ridiculous bitterness of the Volcel.

    Literalizing the irony already partly present in this discourse, sometime in the fall of 2016 the trolls started calling the Left—in particular the members of the podcast team Chapo Trap House and the journalist and cultural theorist Sam Kriss (since accused of sexual harassment)—“ironycels.” The precise definition wavers, but seems to be that the Leftists are failures at irony, “irony-celibate,” even “involuntarily incapable of irony.”

    Because the original phrase is split between voluntary and involuntary, this has given rise to reappropriations, for example Kriss’s, in which “doing too much irony” earns you literal celibacy.

    Kantbot has commented extensively, both in articles and on podcasts, on this controversy. He and Kriss have even gone head-to-head.[xiii]




    In the ironycel debate, it has become clear that Kantbot thinks that socialism has kneecapped the Left, but only sentimentally. The same goes for actual conservatism, which has prevented the Right from embracing its new counterculture. Leaving behind old ideologies is a symptom of standing at the vanguard of a civilizational shift. It is that shift that makes sense of the phrase “Trump will Complete the System of German Idealism.”

    The Left, LogoDaedalus intoned on a podcast, is “metaphysically stuck in the Bush era.” I take this to mean that the Left is caught in an endless cycle of recriminations about the neoliberal model of politics, even as that model has begun to become outdated. Kantbot writes, in an article called “Chapo Traphouse Will Never Be Edgy”:

    Capturing the counterculture changes nothing, it is only by the diligent and careful application of it that anything can be changed. Not politics though. When political ends are selected for aesthetic means, the mismatch spells stagnation. Counterculture, as part of culture, can only change culture, nothing outside of that realm, and the truth of culture which is to be restored and regained is not a political truth, but an aesthetic one involving the ultimate truth value of the narratives which pervade our lived social reality. Politics are always downstream. (Kantbot 2017a)

    Citing Breitbart’s motto, Kantbot argues that continents of theory separate him and LogoDaedalus from the Left. That politics is downstream from culture is precisely what Marx—and by extension, the contemporary Left—could not understand. On several recent podcasts, Kantbot has made just this argument, that the German Enlightenment struck a balance between the “vitality of aesthetics” and political engagement that the Left lost in the generation after Hegel.

    Kantbot has decided, against virtually every Hegel reader since Hegel and even against Hegel himself, that the system of German Idealism is ironic in its deep structure. It’s not a move we can afford to take lightly. This irony, generalized as Schlegel would have it, manipulates the formal and meta settings of communicative situations and thus is at the incipient point of any solidarity. It gathers community through mediation even as it rejects those not in the know. It sits at the membrane of the filter bubble, and—correctly used—has the potential to break or reform the bubble. To be clear, I am not saying that Kantbot has done this work. It is primarily Donald Trump, according to Kantbot’s own argument, who has done this work. But this is exactly what it means to play Hegel to Trump’s Napoleon: to provide the metaphysics for the historical moment, which happens to be the moment where social media and politics combine. Philosophy begins only after an early-morning sleepless tweetstorm once again determines a news cycle. Irony takes its proper place, as Schlegel had suggested, in human history, becoming a political weapon meant to manipulate communication.

    Kantbot was the media theorist of Trump’s ironic moment. The channeling of affect is irreducible, but not unchangeable: this is both the result of some steps we can only wish we’d taken in theory and used in politics before the Alt Right got there, and the actual core of what we might call Alt Right Media Theory. When they say “the Left can’t meme,” in other words, they’re accusing the socialist Left of being anti-intellectual about the way we communicate now, about the conditions and possibilities of social media’s amplifications of the capacity called irony that is baked in to cognition and speech so deeply that we can barely define it even partially. That would match the sense of medium we get from looking at Shannon again, and the raw material possibility with which Schlegel infused the notion of irony.

    This insight, along with its political activation, might have been the preserve of Western Marxism or the other critical theories that succeeded it. Why have we allowed the Alt Right to pick up our tools?

    Kantbot takes obvious pleasure in the irony of using poststructuralist tools, and claiming in a contrarian way that they really derive from a broadly construed German Enlightenment that includes Romanticism and Idealism. Irony constitutes both that Enlightenment itself, on this reading, and the attitude towards it on the part of the content-producers, the German Idealist Trolls. It doesn’t matter if Breitbart was right about the Frankfurt School, or if the Neoreactionaries are right about capitalism. They are not practicing what Hegel called “representational thinking,” in which the goal is to capture a picture of the world that is adequate to it. They are practicing a form of conceptual thinking, which in Hegel’s terms is that thought that is embedded in, constituted by, and substantially active within the causal chain of substance, expression, and history.[xiv] That is the irony of Hegel’s reincarnation after the end of history.

    In media analysis and rhetorical analysis, we often hear the word “materiality” used as a substitute for durability, something that is not easy to manipulate. What is material, it is implied, is a stabilizing factor that allows us to understand the field of play in which signification occurs. Dean’s analysis of the Iraq War does just this, showing the relationship of signs and politics that undermines the aspirational content of political speech in neoliberalism. It is a crucial move, and Dean’s analysis remains deeply informative. But its type—and even the word “material,” used in this sense—is, not to put too fine a point on it, neo-Kantian: it seeks conditions and forms that undergird spectra of possibility. To this the Alt Right has lodged a Hegelian eppur si muove, borrowing techniques that were developed by Marxists and poststructuralists and German Idealists, and remaking the world of mediated discourse. That is a political emergency in which the humanities have a special role to play—but only if we can dispense with political and academic in-fighting and turn our focus to our opponents. What Mark Fisher once called the “Vampire Castle” of the Left on social media is its own kind of constraint on our progress (Fisher 2013). One solvent for it is irony in the expanded field of social media—not jokes, not snark, but dedicated theoretical investigation and exploitation of the rhetorical features of our systems of communication. The situation of mediated communication is part of the objective conjuncture of the present, one that the humanities and the Left cannot afford to ignore, and cannot avoid by claiming not to participate. The alternative to engagement is to cede the understanding, and quite possibly the curve, of civilization to the global Alt Right.

    _____

    Leif Weatherby is Associate Professor of German and founder of the Digital Theory Lab at NYU. He is working on a book about cybernetics and German Idealism.


    _____

    Notes
    [i] Video here. The comment thread on the video generated a series of unlikely slogans for 2020: “MAKE TRANSCENDENTAL IDENTITY GREAT AGAIN,” “Make German Idealism real again,” and the ideological non sequitur “Make dialectical materialism great again.”

    [ii] Neiwert (2017) tracks the rise of extreme Right violence and media dissemination from the 1990s to the present, and is particularly good on the ways in which these movements engage in complex “double-talk” and meta-signaling techniques, including irony in the case of the Pepe meme.

    [iii] I’m going to use this term throughout, and refer readers to Chip Berlet’s useful resource. I’m hoping this article builds on a kind of loose consensus that the Alt Right “talks out of both sides of its mouth,” perhaps best crystallized in the term “dog whistle.” Since 2016, we’ve seen a lot of regular whistling, bigotry without disguise, alongside the rise of the type of irony I’m analyzing here.

    [iv] There is, in this wing of the Online Right, a self-styled “autism” that stands for being misunderstood and isolated.

    [v] Thanks to Moira Weigel for a productive exchange on this point.

    [vi] See the excellent critique of object-oriented ontologies on the basis of their similarities with object-oriented programming languages in Galloway 2013. Irony is precisely the condition that does not reproduce code representationally, but instead shares a crucial condition with it.

    [vii] The paper is a point of inspiration and constant return for Friedrich Kittler, who uses this diagram to demonstrate the dependence of culture on media, which, as his famous quip goes, “determine our situation.” Kittler 1999, xxxix.

    [viii] This kind of redundancy is conceptually separate from signal redundancy, like the strengthening or reduplicating of electrical impulses in telegraph wires. The latter redundancy is likely the first that comes to mind, but it is not the only kind Shannon theorized.

    [ix] This is because Shannon adopts Ludwig Boltzmann’s probabilistic formula for entropy. The formula certainly suggests the slow simplification of material structure, but this is irrelevant to the communications engineering problem, which exists only so long as there are the very complex structures called humans and their languages and communications technologies.

    [x] Shannon presented these findings at one of the later Macy Conferences, the symposia that founded the movement called “cybernetics.” For an excellent account of what Shannon called “Printed English,” see Liu 2010, 39-99.

    [xi] The disavowal follows Freud’s famous “kettle logic” fairly precisely. In describing disavowal of unconscious drives unacceptable to the ego and its censor, Freud used the example of a friend who returns a borrowed kettle broken, and goes on to claim that 1) it was undamaged when he returned it, 2) it was already damaged when he borrowed it, and 3) he never borrowed it in the first place. Žižek often uses this logic to analyze political events, as in Žižek 2005. Its ironic structure usually goes unremarked.

    [xii] Kantbot, “Angela Nagle’s Wild Ride,” http://thermidormag.com/angela-nagles-wild-ride/, visited August 15, 2017—link currently broken.

    [xiii] Kantbot does in fact write fiction, almost all of which is science-fiction-adjacent retoolings of narrative from German Classicism and Romanticism. The best example is his reworking of E.T.A. Hoffmann’s “A New Year’s Eve Adventure,” “Chic Necromancy,” Kantbot 2017c.

    [xiv] I have not yet seen a use of Louis Althusser’s distinction between representation and “theory” (which relies on Hegel’s distinction) on the Alt Right, but it matches their practice quite precisely.

    _____

    Works Cited

    • Beckett, Andy. 2017. “Accelerationism: How a Fringe Philosophy Predicted the Future We Live In.” The Guardian (May 11).
    • Behler, Ernst. 1990. Irony and the Discourse of Modernity. Seattle: University of Washington.
    • Berkowitz, Bill. 2003. “ ‘Cultural Marxism’ Catching On.” Southern Poverty Law Center.
    • Breitbart, Andrew. 2011. Righteous Indignation: Excuse Me While I Save the World! New York: Hachette.
    • Burton, Tara. 2016. “Apocalypse Whatever: The Making of a Racist, Sexist Religion of Nihilism on 4chan.” Real Life Mag (Dec 13).
    • Costello, Jef. 2017. “Trump Will Complete the System of German Idealism!” Counter-Currents Publishing (Mar 10).
    • de Man, Paul. 1996. “The Concept of Irony.” In de Man, Aesthetic Ideology. Minneapolis: University of Minnesota. 163-185.
    • Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1:1. 51-74.
    • Drucker, Johanna. 2018. The General Theory of Social Relativity. Vancouver: The Elephants.
    • Feldman, Brian. 2017. “The ‘Ironic’ Nazi is Coming to an End.” New York Magazine.
    • Fisher, Mark. 2009. Capitalist Realism: Is There No Alternative? London: Zer0.
    • Fisher, Mark. 2013. “Exiting the Vampire Castle.” Open Democracy (Nov 24).
    • Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford.
    • Galloway, Alexander. 2013. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39:2. 347-66.
    • Goerzen, Matt. 2017. “Notes Towards the Memes of Production.” texte zur kunst (Jun).
    • Gray, Rosie. 2017. “Behind the Internet’s Dark Anti-Democracy Movement.” The Atlantic (Feb 10).
    • Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction.” Viewpoint Magazine.
    • Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
    • Higgins, Richard. 2017. “POTUS and Political Warfare.” National Security Council Memo.
    • Huyssen, Andreas. 2017. “Breitbart, Bannon, Trump, and the Frankfurt School.” Public Seminar (Sep 28).
    • Jay, Martin. 2011. “Dialectic of Counter-Enlightenment: The Frankfurt School as Scapegoat of the Lunatic Fringe.” Salmagundi 168/169 (Fall 2010-Winter 2011). 30-40. Excerpt at Canisa.Org.
    • Kantbot (as Edward Waverly). 2017a. “Chapo Traphouse Will Never Be Edgy.”
    • Kantbot. 2017b. “All the Techcomm Blogger’s Men.” Medium.
    • Kantbot. 2017c. “Chic Necromancy.” Medium.
    • Kittler, Friedrich. 1999. Gramophone, Film, Typewriter. Translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.
    • Liu, Alan. 2004. “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse.” Critical Inquiry 31:1. 49-84.
    • Liu, Lydia. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Marwick, Alice and Rebecca Lewis. 2017. “Media Manipulation and Disinformation Online.” Data & Society.
    • Milner, Ryan. 2016. The World Made Meme: Public Conversations and Participatory Media. Cambridge: MIT.
    • Neiwert, David. 2017. Alt-America: The Rise of the Radical Right in the Age of Trump. New York: Verso.
    • Noys, Benjamin. 2014. Malign Velocities: Accelerationism and Capitalism. London: Zer0.
    • Phillips, Whitney and Ryan M. Milner. 2017. The Ambivalent Internet: Mischief, Oddity, and Antagonism Online. Cambridge: Polity.
    • Phillips, Whitney. 2016. This is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge: The MIT Press.
    • Quintilian. 1920. Institutio Oratoria, Book VIII, section 6, 53-55.
    • Schlegel, Friedrich. 1958–. Kritische Friedrich-Schlegel-Ausgabe. Vol. II. Edited by Ernst Behler, Jean Jacques Anstett, and Hans Eichner. Munich: Schöningh.
    • Shannon, Claude, and Warren Weaver. 1964. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Stone, Biz. 2009. “Retweet Limited Rollout.” Press release. Twitter (Nov 6).
    • Walsh, Michael. 2017. The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West. New York: Encounter Books.
    • Winter, Jana and Elias Groll. 2017. “Here’s the Memo that Blew Up the NSC.” Foreign Policy (Aug 10).
    • Žižek, Slavoj. 1993. Tarrying with the Negative: Kant, Hegel and the Critique of Ideology. Durham: Duke, 1993.
    • Žižek, Slavoj. 2005. Iraq: The Borrowed Kettle. New York: Verso.

     

  • R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    a review of Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016)

    by R. Joshua Scannell

    The Stack

    Benjamin Bratton’s The Stack: On Software and Sovereignty is an often brilliant and regularly exasperating book. It is a diagnosis of the epochal changes in the relations between software, sovereignty, climate, and capital that underwrite the contemporary condition of digital capitalism and geopolitics.  Anybody who is interested in thinking through the imbrication of digital technology with governance ought to read The Stack. There are many arguments that are useful or interesting. But reading it is an endeavor. Sprawling out across 502 densely packed pages, The Stack is nominally a “design brief” for the future. I don’t know that I understand that characterization, no matter how many times I read this tome.

    The Stack is chockablock with schematic abstractions. They make sense intuitively or cumulatively without ever clearly coming into focus. This seems to be a deliberate strategy. Early in the book, Bratton describes The Stack–the titular “accidental megastructure” of “planetary computation” that has effectively broken and redesigned, well, everything–as “a blur.” He claims that

    Only a blur provides an accurate picture of what is going on now and to come…Our description of a system in advance of its appearance maps what we can see but cannot articulate, on the one hand, versus what we know to articulate but cannot yet see, on the other. (14)

    This is also an accurate description of the prevailing sensation one feels working through the text. As Ian Bogost wrote in his review of The Stack for Critical Inquiry, reading the book feels “intense—meandering and severe but also stimulating and surprising. After a while, it was also a bit overwhelming. I’ll take the blame for that—I am not necessarily built for Bratton’s level and volume of scholarly intensity.” I agree on all fronts.

    Bratton’s inarguable premise is that the various computational technologies that collectively define the early decades of the 21st century—smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation—are not analytically separable. They are often literally interconnected but, more to the point, they combine to produce a governing architecture that has subsumed older calculative technologies like the nation state, the liberal subject, the human, and the natural. Bratton calls this “accidental megastructure” The Stack.

    Bratton argues that The Stack is composed of six “layers,” the earth, the cloud, the city, the address, the interface, and the user. They all indicate more or less what one might expect, but with a counterintuitive (and often Speculative Realist) twist. The earth is the earth but is also a calculation machine. The cloud is “the cloud” but as a chthonic structure of distributed networks and nodal points that reorganize sovereign power and body forth quasi-feudal corporate sovereignties. The City is, well, cities, but not necessarily territorially bounded, formally recognized, or composed of human users. Users are also usually not human. They’re just as often robots or AI scripts. Really they can be anything that works up and down the layers, interacting with platforms (which can be governments) and routed through addresses (which are “every ‘thing’ that can be computed” including “individual units of life, loaded shipping containers, mobile devices, locations of datum in databases, input and output events and enveloped entities of all size and character” [192], etc.).

    Each layer is richly thought through and described, though it’s often unclear whether the “layer” in question is “real” or a useful conceptual envelope or both or neither. That distinction is generally untenable, and Bratton would almost certainly reject the dichotomy between the “real” and the “metaphorical.” But it isn’t irrelevant for this project. He argues early on that, contra Marxist thought that understands the state metaphorically as a machine, The Stack is a “machine-as-the-state.” That’s both metaphorical and not. There really are machines that exert sovereign power, and there are plenty of humans in state apparatuses that work for machines. But there aren’t, really, machines that are states. Right?

    Moments like these, when The Stack’s concepts productively destabilize given categories (like the state) that have never been coherent enough to justify their power, are when the book is at its most compelling. And many of the counterintuitive moves that Bratton makes start and end with real, important insights. For instance, the insistence on the absolute materiality and absolute earthiness of The Stack and all of its operations leads Bratton to a thoroughgoing and categorical rejection of the prevailing “idiot language” that frames digital technology as though it exists in a literal “cloud,” or some sort of ethereal “virtual” that is not coincident with the “real” world. Instead, in The Stack, every point of contact between every layer is a material event that transduces and transforms everything else. To this end, he inverts Latour’s famous dictum that there is no global, only local. Instead, The Stack as planetary megastructure means that there is only global. The local is a dead letter. This is an anthropocene geography in which an electron, somewhere, is always firing because a fossil fuel is burning somewhere else. But it is also a post-anthropocene geography because humans are not The Stack’s primary users. The planet itself is a calculation machine, and it is agnostic about human life. So, there is a hybrid sovereignty: The Stack is a “nomos of the earth” in which humans are an afterthought.

    A Design for What?

    Bratton is at his conceptual best when he is at his weirdest. Cyclonopedic (Negarestani 2008) passages in which the planet slowly morphs into something like H.P. Lovecraft’s and H.R. Giger’s imaginations fucking in a Peter Thiel fever dream are much more interesting (read: horrifying) than the often perfunctory “real life” examples from “real world” geopolitical trauma, like “The First Sino-Google War of 2009.” But this leads to one of the most obvious shortcomings of the text. It is supposedly a “design brief,” but it’s not clear what or who it is a design brief for.

    For Bratton, design

    means the structuring of the world in reaction to an accelerated decay and in projective anticipation of a condition that is now only the ghostliest of a virtual present tense. This is a design for accommodating (or refusing to accommodate) the post-whatever-is-melting-into-air and prototyping for pre-what-comes-next: a strategic, groping navigation (however helpless) of the punctuations that bridge between these two. (354)

    Design, then, and not theory, because Bratton’s Stack is a speculative document. Given the bewildering and potentially apocalyptic conditions of the present, he wants to extrapolate outwards. What are the heterotopias-to-come? What are the constraints? What are the possibilities? Sounding a familiar frustration with the strictures of academic labor, he argues that this moment requires something more than diagnosis and critique. Rather,

    the process by which sovereignty is made more plural becomes a matter of producing more than discoursing: more about pushing, pulling, clicking, eating, modeling, stacking, prototyping, subtracting, regulating, restoring, optimizing, leaving alone, splicing, gardening and evacuating than about reading, examining, insisting, rethinking, reminding, knowing full-well, enacting, finding problematic, and urging. (303)

    No doubt. And, not that I don’t share the frustration, but I wonder what a highly technical, 500-page diagnosis of the contemporary state of software and sovereignty published and distributed by an academic press and written for an academic audience is if not discoursing? It seems unlikely that it can serve as a blueprint for any actually-existing power brokers, even though its insights are tremendous. At the risk of sounding cynical, calling The Stack a “design brief” seems like a preemptive move to liberate Bratton from having to seriously engage with the different critical traditions that work to make sense of the world as it is in order to demand something better. This allows for a certain amount of intellectual play that can sometimes feel exhilarating but can just as often read as a dodge—as a way of escaping the ethical and political stakes that inhere in critique.

    That is an important elision for a text that is explicitly trying to imagine the geopolitics of the future. Bratton seems to pose The Stack from a nebulous “Left” position that is equally disdainful of the sort of “Folk Politics” that Srnicek and Williams (2015) so loathe and of the accelerationist tinge of the Speculative Realists with whom he seems spiritually aligned. This sense of rootlessness sometimes works in Bratton’s favor. There are long stretches in which his cherry-picking and remixing of ideas from across a bewildering array of schools of thought yield real insights. But just as often, the “design brief” characterization seems to be a way out of thinking the implications of the conjuncture through to their conclusion. There is a breeziness about how Bratton poses futures-as-thought-experiments that is troubling.

    For instance, in thinking through the potential impacts of the capacity to measure planetary processes in real time, Bratton suggests that producing a sensible world is not only a process of generalizing measurement and representation. He argues that

    the sensibility of the world might be distributed or organized, made infrastructural, and activated to become part of how the landscape understands itself and narrates itself. It is not only a diagnostic image then; it is a tool for geo-politics in formation, emerging from the parametric multiplication and algorithmic conjugation of our surplus projections of worlds to come, perhaps in mimetic accordance with one explicit utopian conception or another, and perhaps not. Nevertheless, the decision between what is and is not governable may arise as much from what the model computational image cannot do as much as what it can. (301, emphasis added)

    Reading this, I wanted to know: What explicit utopian project is he thinking about? What are the implications of it going one way and not another? Why mimetic? What does the last bit about what is and is not governable mean? Or, more to the point: who and what is going to get killed if it goes one way and not another? There are a great many instances like this over the course of the book. At the precise moment where analysis might inform an understanding of where The Stack is taking us, Bratton bows out. He’s set down the stakes, and given a couple of ideas about what might happen. I guess that’s what a design brief is meant to do.

    Another example, this time concerning the necessity of geoengineering for solving what appears to be an ever-more-imminent climatic auto-apocalypse:

    The good news is that we know for certain that short-term “geoengineering” is not only possible but in a way inevitable, but how so? How and by whom does it go, and unfortunately for us the answer (perhaps) must arrive before we can properly articulate the question. For the darker scenarios, macroeconomics completes its metamorphosis into ecophagy, as the discovery of market failures becomes simultaneously the discovery of limits of planetary sinks (e.g., carbon, heat, waste, entropy, populist politics) and vice versa; The Stack becomes our dakhma. The shared condition, if there is one, is the mutual unspeakability and unrecognizability that occupies the seat once reserved for Kantian cosmopolitanism, now just a pre-event reception for a collective death that we will actually be able to witness and experience. (354, emphasis added)

    Setting aside the point that it is not at all clear to me that geoengineering is an inevitable or even appropriate (Crist 2016) way out of the anthropocene (or capitalocene? (Moore 2016)) crisis, if the answer for “how and by whom does it go” is to arrive before the question can be properly articulated, then the stack-to-come starts looking a lot like a sort of planetary dictatorship of, well, of whom? Google? Mark Zuckerberg? In-Q-Tel? Y Combinator? And what exactly is the “populist politics” that sits in the Latourian litany alongside carbon, heat, waste, and entropy as a full “planetary sink”? Does that mean Trump, and all the other globally ascendant right-wing “populists”? Or does it mean “populist politics” in the Jonathan Chait sense that can’t differentiate between left and right and therefore sees both political projects as equally dismissible? Does populism include any politics that centers the needs and demands of the public? What are the commitments in this dichotomy? I suppose The Stack wouldn’t particularly care about these sorts of questions. But a human writing a 500-page playbook so that other humans might better understand the world-to-come might be expected to. After all, a choice between geoengineering or collective death might be what the human population of the planet is facing (and for most of the planet’s species, and for a great many of the planet’s human societies, already eliminated or dragged down the road towards it during the current mass extinction, there is no choice), but such a binary doesn’t make for much of a design spec.

    One final example, this time on what the political subject of the stack-to-come ought to look like:

    We…require, as I have laid out, a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, parliamentary liberalism, poststructuralist linguistic reduction, or the will to secede into the moral safety of individual privacy and withdrawn from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.

    If “poststructuralist linguistic reduction” is on the same plane as “parliamentary liberalism” or “homo economicus” as one among several prevailing ideas of the contemporary “political subject,” then I am fairly certain that we are in the realm of academic “theory” rather than geopolitical “design.” The more immediate point is that I do understand what the terms that we ought to abandon mean, and agree that they need to go. But I don’t understand what the redefined political subject looks like. Again, if this is “theory,” then that sort of hand waving is unfortunately often to be expected. But if it’s a design brief—even a speculative one—for the transforming nature of sovereignty and governance, then I would hope for some more clarity on what political subjectivity looks like in The Stack-To-Come.

    Or, and this is really the point, I want The Stack to tell me something more about how The Stack participates in the production and extractable circulation of populations marked for death and debility (Puar 2017). And I want to know what, exactly, is so conceptually radical about pointing out that human beings are not at the center of the planetary systems that are driving transformations in geopolitics and sovereignty. After all, hasn’t that been exactly the precondition for the emergence of The Stack? This accidental megastructure born out of the ruthless expansions of digitally driven capitalism is not just working to transform the relationship between “human” and sovereignty. The condition of its emergence is precisely that most planetary homo sapiens are not human, and are therefore disposable and disposited towards premature death. The Stack might be “our” dakhma, if we’re speaking generically as a sort of planetary humanism that cannot but be read as white—or, more accurately, “capacitated.” But the systematic construction of human stratification along lines of race, gender, sex, and ability as precondition for capitalist emergence freights The Stack with a more ancient, and ignored, calculus: that of the logistical work that shuttles humans between bodies, cargo, and capital. It is, in other words, the product of an older planetary death machine: what Fred Moten and Stefano Harney (2013) call the “logistics in the hold” that makes The Stack hum along.

    The tenor of much of The Stack is redolent of managerial triumphalism. The possibility of apocalypse is always minimized. Bratton offers, a number of times, that he’s optimistic about the future. He is disdainful of the most stringent left critics of Silicon Valley, and he thinks that we’ll probably be able to trust our engineers and institutions to work out The Stack’s world-destroying kinks. He sounds invested, in other words, in a rhetorical-political mode of thought that, for now, seems to have died on November 9, 2016. So it is not surprising that Bratton opens the book with an anecdote about Hillary Clinton’s vision of the future of world governance.

    The Stack begins with a reference to then-Secretary of State Clinton’s 2013 farewell address to the Council on Foreign Relations. In that speech, Clinton argued that the future of international governance requires a “new architecture for this new world, more Frank Gehry than formal Greek.” Unlike the Athenian Agora, which could be held up by “a few strong columns,” contemporary transnational politics is too complicated to rely on stolid architecture, and instead must make use of the type of modular assemblage that makes Gehry famous, one that “at first might appear haphazard, but in fact, [is] highly intentional and sophisticated.” Bratton interprets her argument as a “half-formed question, what is the architecture of the emergent geopolitics of this software society? What alignments, components, foundations, and apertures?” (Bratton 2016, 13).

    For Clinton, future governance must make a choice between Gehry and Agora. The Gehry future is that of the seemingly “haphazard” but “highly intentional and sophisticated” interlocking treaties, non-governmental organizations, and super- and supra-state technocratic actors working together to coordinate the disparate interests of states and corporations in the service of the smooth circulation of capital across a planetary logistics network. On the other side, a world order held up by “a few strong columns”—by implication the status quo after the collapse of the Soviet Union, a transnational sovereign apparatus anchored by the United States. The glaring absence in this dichotomy is democracy—or rather its assumed subsumption into American nationalism. Clinton’s Gehry future is a system of government whose machinations are by design opaque to those who would be governed, but whose beneficence is guaranteed by the good will of the powerful. The Agora—the fountainhead of slaveholder democracy—is metaphorically reduced to its pillars, particularly the United States and NATO. Not unlike ancient Athens, it’s democracy as empire.

    There is something darkly prophetic about the collapse of the Clintonian world vision, and something perversely apposite in Clinton’s rhetorical move to substitute Gehry for the Agora as the proper metaphor for future government. It is unclear why a megalomaniacal corporate starchitecture firm that robs public treasuries blind and facilitates tremendous labor exploitation ought to be the future for which the planet strives.

    For better or for worse, The Stack is a book about Clinton. As a “design brief,” it works from a set of ideas about how to understand and govern the relationship between software and sovereignty that were strongly intertwined with the Clinton-Obama political project. That means, abysmally, that it is now also about Trump. And Trump hangs synecdochically over theoretical provocations for what is to be done now that tech has killed the nation-state’s “Westphalian Loop.” This was a knotty question when the book went to press in February 2016 and Gehry seemed ascendant. Now that the Extreme Center’s (Ali 2015) project of tying neoliberal capitalism to non-democratic structures of technocratic governance appears to be collapsing across the planet, Clinton’s “half-formed question” is even knottier. If we’re living through the demise of the Westphalian nation state, then it’s sounding one hell of a murderous death rattle.

    Gehry or Agora?

    In the brief period between July 21 and November 8, 2016, when the United States’ cognoscenti convinced themselves that another Clinton regime was inevitable, there was a neatly ordered expectation of how “pragmatic” future governance under a prolonged Democratic regime would work. In the main, the public could look forward to another eight years sunk in a “Gehry-like” neoliberal surround subtended by the technocratic managerialism of the Democratic Party’s right edge. And while, for most of the country and planet, that arrangement didn’t portend much to look forward to, it was at least not explicitly nihilistic in its outlook. The focus on management, and on the deliberate dismantling of the nation state as the primary site of governance in favor of the mesh of transnational agencies and organizations that composed 21st century neoliberalism’s star actants, meant that a number of questions about how the world would be arranged were left unsettled.

    By the end of election week, that future had fractured. The unprecedented amateurishness, decrypted racism, and incomparable misogyny of the Trump campaign portended an administration that most thought couldn’t, or at the very least shouldn’t, be trusted with the enormous power of the American executive. This stood in contrast to Obama, and (perhaps to a lesser extent) to Clinton, who were assumed to be reasonable stewards. This paradoxically helps demonstrate just how much the “rule of law” and governance by administrative norms that theoretically underlie the liberal national state had already deteriorated under Obama and his immediate predecessors—a deterioration that was in many ways made feasible by the innovations of the digital technology sector. As many have pointed out, the command-and-control prerogatives that Obama claimed for the expansion of executive power depended essentially on the public perception of his personal character.

    The American people, for instance, could trust planetary drone warfare because Obama claimed to personally vet our secret kill list, and promised to be deliberate and reasonable about its targets. Of course, Obama is merely the most publicly visible part of a kill-chain that puts this discretionary power over life and death in the hands of the executive. The kill-chain is dependent on the power of, and sovereign faith in, digital surveillance and analytics technologies. Obama’s kill-chain, in short, runs on the capacities of an American warfare state—distributed at nodal points across the crust of the earth, and up its Van Allen belts—to read planetary chemical, territorial, and biopolitical fluxes and fluctuations as translatable data that can be packet-switched into a binary apparatus of life and death. This is the calculus that Obama conjures when he defines those mobile data points that concatenate into human beings as “baseball cards” that constitute a “continuing, imminent threat to the American people.” It is the work of planetary sovereignty that rationalizes and capacitates the murderous “fix” and “finish” of the drone program.

    In other words, Obama’s personal aura and eminent reasonableness legitimated an essentially unaccountable and non-localizable network of black sites and black ops (Paglen 2009, 2010) that loops backwards and forwards across the drone program’s horizontal regimes of national sovereignty and vertical regimes of cosmic sovereignty. It is, to use Clinton’s framework, a very Frank Gehry power structure. Donald Trump’s election didn’t transform these power dynamics. Instead, his personal qualities made the work of planetary computation in the service of sovereign power to kill suddenly seem dangerous or, perhaps better: unreasonable. Whether President Donald Trump would be as scrupulous as his predecessor in determining the list of humans fit for eradication was (formally speaking) a mystery, but practically a foregone conclusion. But in both presidents’ cases, the dichotomies between global and local, subject and sovereign, human and non-human that are meant to underwrite the nation state’s rights and responsibilities to act are fundamentally blurred.

    Likewise, Obama’s federal imprimatur recast the transparently disturbing decision to pursue mass distribution of privately manufactured surveillance technology – Taser’s police-worn body cameras, for instance – as a reasonable policy response to America’s dependence on heavily armed paramilitary forces to maintain white supremacy and crush the poor. Under Obama and Eric Holder, American liberals broadly trusted that digital criminal justice technologies were crucial for building a better, more responsive, and more responsible justice system. With Jeff Sessions in charge of the Department of Justice, the idea that the technologies that Obama’s Presidential Task Force on 21st Century Policing lauded as crucial for achieving the “transparency” needed to “build community trust” between historically oppressed groups and the police remained plausible instruments of progressive reform suddenly seemed absurd. Predictive policing, ubiquitous smart camera surveillance, and quantitative risk assessments sounded less like a guarantee of civil rights and more like a guarantee of civil rights violations under a president who lauds extrajudicial police power. Trump goes out of his way to confirm these civil libertarian fears, such as when he told Long Island law enforcement that “laws are stacked against you. We’re changing those laws. In the meantime, we need judges for the simplest thing — things that you should be able to do without a judge.”

    But, perhaps more to the point, the rollout of these technologies, like the rollout of the drone program, formalized a transformation in the mechanics of sovereign power that had long been underway. Stripped of the sales pitch and abstracted from the constitutional formalism that ordinarily sets the parameters for discussions of “public safety” technologies, what digital policing technologies do is flatten out the lived and living environment into a computational field. Police-worn body cameras quickly traverse the institutional terrain from a tool meant to secure civil rights against abusive officers into an artificially intelligent weapon that flags facial structures that match with outstanding warrants, that calculates changes in enframed bodily comportment to determine imminent threat to the officer-user, and that captures the observed social field as data privately owned by the public safety industry’s weapons manufacturers. Sovereignty, in this case, travels up and down a Stack of interoperable calculative procedures, with state sanction and human action just another data point in the proper administration of quasi-state violence. After all, it is Axon (formerly Taser), and not a government, that controls the servers that their body cams draw on to make real-time assessments of human danger. The state sanctions a human officer’s violence, but the decision-making apparatus that situates the violence is private, and inhuman. Inevitably, the drone war and carceral capitalism collapse into one another, as drones are outfitted with AI designed to identify crowd “violence” from the sky, a vertical parallax to pair with the officer-user’s body-worn camera.

    Trump’s election seemed to show, with a clarity that had hitherto been unavailable to many, that wedding the American security apparatus’ planetary sovereignty to twenty years of unchecked libertarian technological triumphalism (even, or especially, if in the service of liberal principles like disruption, innovation, efficiency, transparency, convenience, and generally “making the world a better place”) might, in fact, be dangerous. When the Clinton-Obama project collapsed, its assumption that the intertwining of private and state sector digital technologies inherently improves American democracy and economy, and increases individual safety and security, looked absurd. The shock of Trump’s election, quickly and self-servingly blamed on Russian agents and Facebook, transformed Silicon Valley’s broadly shared Prometheanism into interrogations of the industry’s corrosive infrastructural toxicity, and its deleterious effect on the liberal national state. If tech was ever going to come to Jesus, the end of 2016 would have had to be the moment. It did not.

    A few days after Trump won election I found myself a fly on the wall in a meeting with mid-level executives for one of the world’s largest technology companies (“The Company”). We were ostensibly brainstorming how to make The Cloud a force for “global good,” but Trump’s ascendancy and all its authoritarian implications made the supposed benefits of cloud computing—efficiency, accessibility, brain-shattering storage capacity—suddenly terrifying. Instead of setting about the dubious task of imagining how a transnational corporation’s efforts to leverage the gatekeeping power over access to the data of millions, and the private control over real-time identification technology (among other things) into heavily monetized semi-feudal quasi-sovereign power could be Globally Good, we talked about Trump.

    The Company’s reps worried that, Peter Thiel excepted, tech didn’t have anybody near enough to Trump’s miasmatic fog to sniff out the administration’s intentions. It was Clinton, after all, who saw the future in global information systems. Trump, as we were all so fond of pointing out, didn’t even use a computer. Unlike Clinton, the extent of Trump’s mania for surveillance and despotism was mysterious, if predictable. Nobody knew just how many people of color the administration had in its crosshairs, and The Company reps suggested that the tech world wasn’t sure how complicit it wanted to be in Trump’s explicitly totalitarian project. The execs extemporized on how fundamental the principles of democratic and republican government were to The Company, how committed they were to privacy, and how dangerous the present conjuncture was. As the meeting ground on, reason slowly asphyxiated on a self-evidently implausible bait hook: that it was now both the responsibility and appointed role of American capital, and particularly of the robber barons of Platform Capitalism (Srnicek 2016), to protect Americans from the fascistic grappling of American government. Silicon Valley was going to lead the #resistance against the very state surveillance and overreach that it capacitated, and The Company would lead Silicon Valley. That was the note on which the meeting adjourned.

    That’s not how things have played out. A month after that meeting, on December 14, 2016, almost all of Silicon Valley’s largest players sat down at Trump’s technology roundtable. Explaining themselves to an aghast (if credulous) public, tech’s titans argued that it was their goal to steer the new chief executive of American empire towards a maximally tractable gallimaufry of power. This argument, plus over one hundred companies’ decision to sign an amici curiae brief opposing Trump’s first attempt at a travel ban aimed at Muslims, seemed to publicly signal that Silicon Valley was prepared to #resist the most high-profile degradations of contemporary Republican government. But, in April 2017, Gizmodo inevitably reported that those same companies that appointed themselves the front line of defense against depraved executive overreach in fact quietly supported the new Republican president before he took office. The blog found that almost every major concern in the Valley donated tremendously to the Trump administration’s Presidential Inaugural Committee, which was impaneled to plan his sparsely attended inaugural parties. The Company alone donated half a million dollars. Only two tech firms donated more. It seemed an odd way to #resist.

    What struck me during the meeting was how weird it was that executives honestly believed a major transnational corporation would lead the political resistance against a president committed to the unfettered ability of American capital to do whatever it wants. What struck me afterward was how easily the boundaries between software and sovereignty blurred. The Company’s executives assumed, ad hoc, that their operation had the power to halt or severely hamper the illiberal policy priorities of government. By contrast, it’s hard to imagine mid-level General Motors executives imagining that they have the capacity or responsibility to safeguard the rights and privileges of the republic. Except in an indirect way, selling cars doesn’t have much to do with the health of state and civil society. But state and civil society is precisely what Silicon Valley has privatized, monetized, and re-sold to the public. But even “state and civil society” is not quite enough. What Silicon Valley endeavors to produce is, pace Bratton, a planetary simulation as prime mover. The goal of digital technology conglomerates is not only to streamline the formal and administrative roles and responsibilities of the state, or to recreate the mythical meeting houses of the public sphere online. Platform capital has as its target the informational infrastructure that makes living on earth seem to make sense, to be sensible. And in that context, it’s commonsensical to imagine software as sovereignty.

    And this is the bind that will return us to The Stack. After one and a half relentless years of the Trump presidency, and a ceaseless torrent of public scandals concerning tech companies’ abuse of power, the technocratic managerial optimism that underwrote Clinton’s speech has come to a grinding halt. For the time being, at least, the “seemingly haphazard yet highly intentional and sophisticated” governance structures that Clinton envisioned are not working as they have been pitched. At the same time, the cavalcade of revelations about the depths that technology companies plumb in order to extract value from a polluted public has led many to shed delusions about the ethical or progressive bona fides of an industry built on a collective devotion to Ayn Rand. Silicon Valley is happy to facilitate authoritarianism and Nazism, to drive unprecedented crises of homelessness, to systematically undermine any glimmer of dignity in human labor, to thoroughly toxify public discourse, and to entrench and expand carceral capitalism, so long as doing so expands the platform, attracts advertising and venture capital, and increases market valuation. As Bratton points out, that’s not a particularly Californian Ideology. It’s The Stack, both Gehry and Agora.

    _____

    R. Joshua Scannell holds a PhD in Sociology from the CUNY Graduate Center. He teaches sociology and women’s, gender, and sexuality studies at Hunter College, and is currently researching the political economic relations between predictive policing programs and urban informatics systems. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    Back to the essay

    _____

    Works Cited

    • Ali, Tariq. 2015. The Extreme Center: A Warning. London: Verso.
    • Crist, Eileen. 2016. “On the Poverty of Our Nomenclature.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 14-33. Oakland: PM Press.
    • Harney, Stefano, and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Moore, Jason W. 2016. “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 1-13. Oakland: PM Press.
    • Negarestani, Reza. 2008. Cyclonopedia: Complicity with Anonymous Materials. Melbourne: re.press.
    • Paglen, Trevor. 2009. Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World. Boston: Dutton Adult.
    • Paglen, Trevor. 2010. Invisible: Covert Operations and Classified Landscapes. Reading: Aperture Press.
    • Puar, Jasbir. 2017. The Right to Maim: Debility, Capacity, Disability. Durham: Duke University Press.
    • Srnicek, Nick. 2016. Platform Capitalism. Boston: Polity Press.
    • Srnicek, Nick, and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. London: Verso.
  • Rob Hunter — The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory

    Rob Hunter — The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory

    Rob Hunter [*]

    Introduction

    In official, commercial, and activist discourses, networked computing is frequently heralded for establishing a field of inclusive, participatory political activity. It is taken to be the latest iteration of, or a standard-bearer for, “technology”: an autonomous force penetrating the social world, an independent variable whose magnitude may not directly be modified and whose effects are or ought to be welcomed. The internet, its component techniques and infrastructures, and related modalities of computing are often supposed to be accelerating and multiplying various aspects of the ideological lynchpin of the neoliberal order: individual sovereignty.[1] The internet is hailed as the dawn of a new communication age, one in which democracy is to be reinvigorated and expanded through the publicity and interconnectivity made possible by new forms of networked relations among informed consumers.

    Composed of consumer choice, intersubjective rationality, and the activity of the autonomous subject, such sovereignty also forms the basis of many strands of contemporary ethical thought—which has increasingly come to displace rival conceptions of political thought in sectors of the Anglophone academy. In this essay, I focus on two turns and their parallels—the turn to the digital in commerce, politics, and society; and the turn to the ethical in professional and elite thought about how such domains should be ordered. I approach the digital turn through the case of the free and open source software movements. These movements are concerned with sustaining a publicly-available information commons through certain technical and juridical approaches to software development and deployment. The community of free, libre, and open source (FLOSS) developers and maintainers is one of the more consequential spaces in which actors frequently endorse the claim that the digital turn precipitates an unleashing of democratic potential in the form of improved deliberation, equalized access to information, networks, and institutions, and a leveling of hierarchies of authority. I approach the ethical turn through an examination of the political theory of democracy, particularly as it has developed in the work of theorists of deliberative democracy like Jürgen Habermas and John Rawls.

    By FLOSS I refer, more or less interchangeably, to software that is licensed such that it may be freely used, modified, and distributed, and whose source code is similarly available so that it may be inspected or changed by anyone (Free Software Foundation 2018). (It stands in contradistinction to “closed source” or proprietary software that is typically produced and sold by large commercial firms.) The agglomeration of “free,” “libre,” and “open source” reflects the multiple ideological geneses of non-proprietary software. Briefly, “free” or “libre” software is so named because, following Stallman’s (2015) original injunction in 1985, the conditions of its distribution forbid rendering the code (or derivative code) proprietary for the sake of maximizing the freedom of downstream coders and users to do as they see fit with it. The signifier “free” primarily connotes the absence of restrictions on use, modification, and distribution, rather than considerations of cost or exchange value. Of crucial importance to the free software movement was the adoption of “copyleft” licensure of software, in which copies of software are freely distributed with the restriction that subsequent users and distributors not impose additional restrictions upon subsequent distribution. As Stallman has noted, copyleft is built on a deliberate contradiction of copyright: “Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free” (Stallman 2002, 22). Avowed members of the free software movement also conceive of free software’s importance not just in technical terms but in moral terms as well. For them, the free software ecosystem is a moral-pedagogical space in which values are reproduced and developers’ skills are fostered through unfettered access to free software (Kelty 2008).

    “Open source” software derives its name from a push—years after Stallman’s cri de coeur—that stressed non-proprietary software’s potential in the business world. Advocates of the open source framing downplayed free software’s origins in the libertarian-individualist ethos of the early free software movement. They discarded its rhetorics of individual freedom in favor of the invocation of “innovation,” “openness,” and neoliberal subjectivity. Toward the end of the twentieth century, open source activists “partially codified this philosophical frame by establishing a clear priority for pragmatic technical achievement over ideology (which was more central to the culture of the Free Software Foundation)” (Weber 2005, 165). In the current moment, antagonisms between proponents of the respective terminologies are comparatively muted. In many FLOSS developer spaces, the most commonly-avowed view is that the practical upshot of the differences in emphasis between “free” and “open source” is unimportant: the typical user or producer doesn’t care, and the immediate social consequences of the distinction are close to nil. (It is noteworthy that this framing is fully compatible with the self-consciously technicist, pragmatic framing of the open source movement, less so with the ideological commitments of the free software movement. Whether or not it is the case at the micro level that free software and open source software retain meaningfully different political valences is beyond the scope of this essay, although it is possible that voices welcoming an elision of “free” and “open source” do protest too much.)

    FLOSS is situated at the intersection of several trends and tendencies. It is a body of technical practice (hacking or coding); it is also a political-ethical formation. FLOSS is an integral component of capitalist software development—but it is also a hobbyist’s toy and a creator’s instrument (Kelty 2008), a would-be entrepreneur’s tool (Weber 2005), and an increasingly essential piece of academic kit (see, e.g., Coleman 2012). A generation of scholarship in anthropology, cultural studies, history, sociology, and other related fields has established that FLOSS is an appropriate object of study not only because its participants are typically invested in the internet-as-emancipatory-technology narrative, but also because free and open source software development has been profoundly consequential for both the cultural and technical character of the present-day information commons.

    In the remainder of the essay, I gesture at a critique of this view of the internet’s alleged emancipatory potential by examining its underlying assumptions and the theory of democracy to which it adheres. This theory trades on the idea that democracy is an ethical practice, one that achieves its fullest expression in the absence of coercion and the promotion of deliberative norms. This approach to thinking about democracy has numerous analogues in current debates in political theory and political philosophy. In prevailing models of liberal politics, institutions and ethical constraints are privileged over concepts like organization, contestation, and—above all—the pursuit and exercise of power. Indeed, within contemporary liberal political thought it is sometimes difficult to discern the activity of thinking about politics as such. I do not argue here for the merits of contestatory democracy, nor do I conceal an unease with the depoliticizing tendencies of deliberative democracy, or with the tendency to substitute the ethical for the political. Instead I draw out the theoretical commonalities between the emergence of deliberative democracy and the turn toward the digital in relations of production and reproduction. I suggest that critiques of the shortcomings of liberal thought regarding political activity and political persuasion are also applicable to the social and political claims and propositions that undergird the strategies and rhetorics of FLOSS. The hierarchies of commitment that one finds in contemporary liberalism may be detected in FLOSS thought as well. Liberalism typically prioritizes intersubjectivity over mass political action and contestation. Similarly, FLOSS rhetoric focuses on ethical persuasion rather than the pursuit of influence and social power such that proprietarian computing may be resisted or challenged. Liberalism also prioritizes property relations over other social relations. The FLOSS movement similarly retains a stark commitment to the priority of liberal property relations and to the idea of personal property in digital commodities (Pedersen 2010).

    In the context of FLOSS and the information commons, a depoliticized theory of democracy fails to attend to the dynamics of power, and to crucial considerations of political economy in communications and computing. An insistence on conceiving of democracy as an ethical aspiration or as a moral ideal—rather than as a practice of mass politics with a given historical and institutional specificity—serves to obscure crucial features of the internet as a cultural and social phenomenon. It also grants an illusory warrant for ideological claims to the effect that computing and internet-mediated communication constitute meaningful and consequential forms of civic participation and political engagement. As the ethical displaces the political, so the technological displaces the ethical. In the process, the workings of power are obscured, the ideological trappings of technologically-mediated domination are mystified, and the social forms that are peculiar to internet subcultures are naturalized as typifying the form of social organization that all democrats ought to seek after.

    In identifying the theoretical affinities between the liberalism of the digital turn and the ethical turn in liberal political theory, I hope to contribute to an enriched, interdisciplinary understanding of the available spaces for investigation and research with respect to emerging trends in digital life. The social relations that are both constituted by and constitutive of the worlds of software, networked communication, and pervasive computing are rightly becoming the objects of sustained study within disparate fields in humanistic disciplines. This essay aims at provoking new questions in such study by examining the theoretical linkages between the digital turn and the ethical turn.

    The Digital Turn

    The internet—considered in the broadest possible sense, as something composed of networks and terminals through which various forms of sociality are mediated electronically—attracts, of course, no small amount of academic, elite, and popular attention. A familiar story tends to arise out of these attentions. The digital turn ushers in the promise of digital democracy: an expansion of opportunities for participation in politics (Klein 1999), and a revolutionizing of communications that connects individuals in networks (Castells 2010) of informed and engaged consumers and producers of non-material content (Shirky 2008). Dissent would prove impossible to stifle, as information—endowed with its own virtual, composite personality, and empowered by sophisticated technologies—would both want and be able to be free. “The Net interprets censorship as damage and routes around it” (as cited in Reagle 1999) is a famous—and possibly apocryphal—variant of this piece of folk wisdom. Pervasive networked computing ensures that citizens will be self-mobilizing in their participation in politics and in their scrutiny of corruption and rights abuses. Capital, meanwhile, can anticipate a new suite of needs to be satisfied through informational commodities. The only losers are governments that, despite enthusiastic rhetoric about an “information superhighway,” are unable to keep pace with technological growth, or with popular adoption of decentralized communications media. Their capacities to restrict or control discourse will be crippled; their control over their own populations will diminish in proportion to the growth of electronically-mediated communication.[2]

    Much of the excitement over the internet is freighted with neoliberal (Brown 2005) ideology, either in implicit or explicit terms. On this view, liberalism’s focus on the unfettered movement of commodities and the unrestricted consumption activities of individuals will find its final and definitive instantiation in a world of digital objects (with a marginal cost approaching zero) and the satisfaction of consumer needs through novel and innovative patterns of distribution. The cultural commons may be reclaimed through transformations of digital labor—social, collaborative, and remix-friendly (Benkler 2006). Problems of production can be solved through increasingly sophisticated chains of logistics (Bonacich and Wilson 2008), finally fulfilling the unrealized cybernetic dreams of planners and futurists in the twentieth century.[3] Political superintendence of the market—and many other social fields—will be rendered redundant by rapid, unmediated feedback mechanisms linking producers and consumers. This contradictory utopia will achieve a non-coercive panopticon of full information, made possible through the endless concatenation of individual decisions to consume, evaluate, and generate information (Shirky 2008).

    This prediction has not been vindicated. Contemporary observers of the internet age do not typically describe it in terms of democratic vistas and cultural efflorescence. They are likelier to examine it in terms of the extension of technologies of control and surveillance, and in terms of the subsumption of sociality under the regime of neoliberal capital accumulation. Indeed, the digital turn follows a trajectory similar to that of the neoliberal turn in governance. The neoliberal turn has enhanced rather than undermined the capacities of the state. Those capacities are directed not at the provision of public goods and social services but rather at coercive security and labor discipline. The digital turn’s course has decidedly not been one of individual empowerment and an expansion of the scope of participatory forms of democratic politics. Instead, networked computing is now a profit center for a small number of titanic capitals. Certainly, the revolution in communications technology has influenced social relations. But the political consequences of that influence do not constitute a profound transformation and extension of democracy (Hindman 2008). Nor are the consequences of the revolution in communications uniformly emancipatory (Morozov 2011). More generally, the subsumption of greater swathes of sociality within the logics of computing presents the risk of the enclosure of public information, and of the extension of the capabilities of the powerful to surveil and coerce others while evading public supervision (Drahos 2002, Golumbia 2009, Pasquale 2015).

    Extensive critiques of “the Californian ideology” (Barbrook and Cameron 2002), renascent “cyberlibertarianism” (Dahlberg 2010) and its affinities with longstanding currents in right-wing thought (Golumbia 2013), and related ideological formations are all ready to hand. The digital turn is of course not characterized by a singular politics. However, the hegemonic political tendency associated with it may be fairly described as a complex of libertarian ideology, neoliberal political economy, and antistatist rhetoric. The material substrate for this complex is the burgeoning arena of capitals pursuing profits through the exploitation of “digital labor” (Fuchs 2014). Such labor occurs in software development, but also in hardware manufacturing; the buying, selling, and licensing of intellectual property; and the extractive industries providing the necessary mineral ores, rare earth metals, and other primary inputs for the production of computers (on this point see especially Dyer-Witheford 2015). The growth of this sector has been accomplished through the exploitation of racialized and marginalized populations (see, for example, Amrute 2016), the expropriation of the commons through the transformation of public assets into private property, and the decoupling in the public mind of any link between easily accessed electronic media and computing power, on the one hand, and massive power consumption and environmental devastation, on the other.

    To the extent that hopes for the emancipatory potential of a cyberlibertarian future have been dashed, enthusiasm for the left-right hybrid politics that first bruited it is still widespread. In areas in which emancipatory hopes remain unchastened by the experience of capital’s colonization of the information commons, that enthusiasm is undiminished. FLOSS movements are important examples of such areas. In FLOSS communities and spaces, left-liberal commitments to social justice causes are frequently melded with a neoliberal faith in decentralized, autonomous activity in the development, deployment, and maintenance of computing processes. When FLOSS activists self-reflexively articulate their political commitments, they adopt rhetorics of democracy and cooperative self-determination that are broadly left-liberal. However, the politics of FLOSS, like hacker politics in general, also betray a right-libertarian fixation on the removal of obstacles to individual wills. The hacker’s political horizon is the unfettering of the socially untethered, electronically empowered self (Borsook 2000). Similarly, the liberal commitments that undergird contemporary theories of “deliberative democracy” are easily adapted to serve libertarian visions of the good society.

    The Ethical and the Political

    The liberalism of such political theory as is encountered in FLOSS discourse may be fruitfully compared to the turn toward deliberative models of social organization. This turn is characterized by a dual trend in postwar political thought, centered on but not limited to the North Atlantic academy. It consists of the elision of theoretical distinctions between individual ethical practice and democratic citizenship, while widening the theoretical gap between agonistic practices—contestation, conflict, direct action—and policy-making within the institutional context of liberal constitutionality. The political is often equated with conflict—and thereby, potentially, violence or coercion. The ethical, by contrast, comes closer to resembling democracy as such. Democracy is, or ought to be, “depoliticized” (Pettit 2004); deliberative democracy, aimed at the realization of ethical consensus, is normatively prior to aggregative democracy or the mere counting of votes. On this view, the historical task of democracy is not to grant greater social purchase to political tendencies or formations; nor does it consist in forging tighter links between decision-making institutions and the popular will. Rather, democracy is a legitimation project, under which the decisions of representative elites are justified in terms of the publicity of the reasons or justifications supplied on their behalf. The uncertain movement between these two poles—conceiving of democracy as a normative ideal, and conceiving of it as a description of adequately legitimated institutions—is hardly unique to contemporary democratic theory. The turn toward the deliberative and the ethical is distinguished by the narrowness of its conception of the democratic—indeed by its insistence that the democratic, properly understood, is characterized by the dampening of political conflict and a tendential movement toward consensus.

    Why ought we consider the trajectory of postwar liberal thought in conjunction with the digital turn? First, there are, of course, similarities and continuities between the fortunes of liberal ideology in both the world of software work and the world of academic labor. The former is marked to a much greater extent by widespread distrust of mechanisms of governance and by outpourings of an ascendant strain of libertarian triumphalism. Where ideological development in software work has charted a libertarian course, in academic Anglophone political thought it has more closely followed a path of neoliberal restructuring. To the extent that we maintain an interest in the consequences of the digitization of sociality, it is germane and appropriate to consider liberalism in software work and liberalism in professional political theory in tandem. However, there is a rather more important reason to chart the movement of liberal political thought in this context: many of the debates, problematics, and proffered solutions in the politico-ideological discourse in the world of software work are, as it were, always already present in liberal democratic theory. As such, an examination of the ethical turn—liberal democratic theory’s disavowal of contestation, and of the agon that interpellates structures of politics (Mouffe 2005, 80–105)—can aid subsequent examinations of the ontological, methodological, and normative presuppositions that inform the self-understanding of formations and tendencies within FLOSS movements. Both FLOSS discourses and professional democratic theory tend to discharge conclusions in favor of a depoliticized form of democracy.

    Deliberative democracy’s roots lie in liberal legitimation projects begun in response to challenges from below and outside existing power structures. Despite effacing its own political content, deliberative democracy must nevertheless be understood as a political project. Notable gestures toward the concept may be found in John Rawls’s theory-building project, beginning with A Theory of Justice (1971); and in Jürgen Habermas’s attempts to render the intellectual legacy of the Frankfurt School compatible with postwar liberalism, culminating in Between Facts and Norms (1996). These philosophical moves were being made at the same time as the fragmentation of the postwar political and economic consensus in developed capitalist democracies. Critics have detected a trend toward retrenchment in both currents: the evacuation of political economy—let alone Marxian thought—from critical theory; the accommodation made by Rawls and his epigones with public choice theory and neoliberal economic frames. The turn from contestatory politics in Anglophone political thought was simultaneous with the rise of a sense that the institutional continuity and stability of democracy were in greater need of defense than were demands for political criticism and social transformation. By the end of the postwar boom years, an accommodation with “neoliberal governmentality” (Brown 2015) was under way throughout North Atlantic intellectual life. The horizons of imagined political possibility were contracting at the very conjuncture when labor movements and left political formations foundered in the face of the consolidation of the capitalist restructuring under way since the third quarter of the twentieth century.

    Rawls’s account of justified institutions does not place a great emphasis on mass politics; nor does Habermas’s delineation of the boundaries of the ideal circumstances for communication—except insofar as the memory of fascism that Habermas inherited from the Frankfurt School weighs heavily on his forays into democratic theory. Mass politics is an inherently suspect category in Habermas’s thought. It is telling—and by no means surprising—that the two heavyweight theorists of North Atlantic postwar social democracy are primarily concerned with political institutions and with “the ideal speech situation” (Habermas 1996, 322–328) rather than with mass politics. They are both concerned with making justificatory moves rather than with exploring the possibilities and limits to mass politics and collective action. Rawls’s theory of justice describes a technocratic scheme for a minimally redistributive social democratic polity, while Habermas’s oeuvre has increasingly come to serve as the most sophisticated philosophical brief on behalf of the project of European cosmopolitan liberalism. Within the confines of this essay it is impossible to engage in a sustained consideration of the full sweep of Rawls’s political theory, including his conception of an egalitarian and redistributive polity and his constructivist account of political justification; similarly, the survey of Habermas presented here is necessarily compressed and abstracted. I restrict the scope of my critical gestures to the contributions made by Rawls and Habermas to the articulation of a deliberative conception of democracy. In this respect, they were strikingly similar:

    Both Rawls and Habermas assert, albeit in different ways, that the aim of democracy is to establish a rational agreement in the public sphere. Their theories differ with respect to the procedures of deliberation that are needed to reach it, but their objective is the same: to reach a consensus, without exclusion, on the ‘common good.’ Although they claim to be pluralist, it is clear that theirs is a pluralism whose legitimacy is only recognized in the private sphere and that it has no constitutive place in the public one. They are adamant that democratic politics requires the elimination of passions from the public sphere. (Mouffe 2013, 55)

    In neither Rawls’s nor Habermas’s writings is the theory of deliberative democracy simply the expression of a preference for the procedural over the substantive. It is better understood as a preference for unity and consensus, coupled with a minoritarian suspicion of the institutions and norms of mass electoral democracy. It is true that both their deliberative democratic theories evince considerable concern for the procedures and conditions under which issues are identified, alternatives are articulated, and decisions are made. However, this concern is motivated by a preoccupation with a particular substantive interest: specifically, the reproduction of liberal democratic forms. Such forms are valued not for their own sake—indeed, that would verge on incoherence—but because they are held to secure certain moral ends: respect for individuals, reciprocity of regard or recognition between persons, the banishment of coercion from public life, and so on. The ends of politics are framed in terms of morality—a system of universal duties or ends. The task of political theory is to envision institutions which can secure ends or goods that may be seen as intrinsically desirable. Notions that the political might be an autonomous domain of human activity, or that political theory’s ambit extends beyond making sense of existing configurations of institutions, are discarded. In their place is an approach to political thought rooted in concerns about technologies of governance. Such an approach concerns itself with political disagreement primarily insofar as it is a foreseeable problem that must be managed and contained.

    Depoliticized, deliberative democracy may be characterized as one or more of several forms of commitment to an apolitical conception of social organization. It is methodologically individualist: it takes the (adult, sociologically normative and therefore likely white and cis-male) individual person as the appropriate object of analysis and as the denominator to which social structures ultimately reduce. It is often intersubjective in its model of communication: that is, ideas are transmitted by and between individuals, typically or ideally two individuals standing in a relation of uncoerced respect with one another. It is usually deliberative in the kind of decision-making it privileges: authoritative decisions arise not out of majoritarian voting mechanisms or mass expressions of collective will, but rather out of discursive encounters that encourage the formation and exchange of claims whose content conforms to specific substantive criteria. It is often predicated on the notion that the most valuable or self-constitutive of individuals’ beliefs and understandings are pre-political: individual rational agents are “self-authenticating sources of valid claims” (Rawls 2001, 23). Their claims are treated as exogenous to the social and political contexts in which they are found. Depoliticized democracy is frequently racialized and erected on a series of assumptions and cultural logics of hierarchy and domination (Mills 1997). Finally, depoliticized democracy insists on a particular hermeneutic horizon: the publicity of reasons. For any claim to be considered credible, and for public exercises to be considered legitimate, they must be comprehensible in terms of the worldviews, held premises, or anterior normative commitments of all persons who might somehow be affected by them.

    Theories of deliberative democracy are not merely suspicious of political disagreement—they typically treat it as pathological. Social cleavages over ideology (which may always be reduced to the concatenation of individual deliberations) are evidence either of bad-faith argumentation or of a failure to apprehend the true nature of the common good. To the extent that deliberative democracy is not nakedly elitist, it ascribes to those democratic polities it considers well-formed a capacity for a peculiar kind of authority. Such collectivities are capable, by virtue of their well-formed deliberative structures, of discharging decisions that are binding precisely because they are correct with reference to standards that are anterior to any dialectic that might take place within the social body itself. Consequently, much depends on the ideological content of those standards.

    The concept of public reason has acquired special potency in the hands of Rawls’s legatees in North American analytic political philosophy. Similar in aim to Habermas’s ideal speech situation, the modern idea of public reason is meant to model an ideal state of deliberative democracy. Rawls locates its origins in Rousseau (Rawls 2007, 231). However, it acquires a specifically Kantian conception in his elaboration (Rawls 2001, 91–94), and an extensive literature in analytic political philosophy is devoted to developing the concept in a Rawlsian mode (for a good recent discussion see Quong 2013). Public reason requires that contested policies’ justifications be comprehensible to those who controvert those policies. More generally, the polity in which the ideal of public reason obtains is one in which interlocutors hold themselves to be obliged to share, to the extent possible, the premises from which political reasoning proceeds. Arguments that are deemed to originate from outside the boundaries of public reason cannot serve a legitimating function. Public reason usually finds expression in the writings of liberal theorists as an explanation for why controverted policies or decisions may nevertheless be viewed as substantively appropriate and democratically legitimated.

    Proponents of public reason often cast the ideal as a commonplace of reasonable discussion that merely binds interlocutors to deliberate in good faith. However, public reason may also be described as a cudgel with which to police the boundaries of debate. It effectively cedes discursive power to those who controvert public policy in order to control the trajectory of the discourse—if they are possessed of enough social power. Explicitly liberal in its philosophical genealogy, public reason is expressive of liberal democratic theory’s wariness with respect to both radical and reactionary politics. Many liberal theorists are primarily concerned to show how public reason constrains reactionaries from advancing arguments that rest on religious or theological grounds. An insistence on public reasonableness (perhaps framed through an appeal to norms of civility) may also allow the powerful to cavil at challenges to prevailing economic thought as well as to prevailing understandings of the relationship between the public and the religious.

    Habermas’s project on the communicative grounds of liberal democracy (1998) reflects a similar commitment to containing disagreement and establishing the parameters of when and how citizens may contest political institutions and the rules they produce and enforce. His “discourse principle” (1996, 107) is not unlike Rawls’s conception of public reason in that it is intended to serve as a justificatory ground for deliberations tending toward consensus. According to the discourse principle, a given rule or law is justified if and only if those who are to be affected by it could accept it as the product of a reasonable discourse. Much of Habermas’s work—particularly Between Facts and Norms (1996)—is devoted to establishing the parameters of reasonable discourses. Such cartographies are laid out not with respect to controversies arising out of actually existing politics (such as pan-European integration or the problems of contemporary German right-wing politics). They are instead sited within the coordinates of Habermas’s specification of the linguistic and pragmatic contours of the social world in established constitutional democracies. The practical application of the discourse principle is often recursive, in that the particular implications and the scope of the discourse principle require further elaboration or extension within any given domain of practical activity in which the principle is invoked. Despite its rarefied abstraction, the discourse principle is meant in the final instance to be embedded in real activities and sites of discursive activity. (Habermas’s work in ethics parallels his discourse-theoretic approach to politics. His dialogical principle of universalization holds that moral norms are valid insofar as their observance—and the effects of that observance—would be accepted singly and jointly by all those affected.)

    Both Rawls’s and Habermas’s conceptions of the communicative activity underlying collective decision-making are strongly motivated by intersubjective ethical concerns. If anything, Habermas’s discourse ethics, and the parallel moves that he makes in his interventions in political thought, are more exacting than Rawls’s conception of public reason, both in terms of the discursive environments that they presuppose and the demands that they place upon individual interlocutors. Both thinkers’ views also conceive of political conflict as a field in which ethical questions predominate. Indeed, under these views political antagonism might be seen as pathological, or at least taken to be the locus of a sort of problem situation: If politics is taken to be a search for the common welfare (grounded in commonly-avowed terms), or is held to consist in the provision of public goods whose worth can, in principle, be agreed upon, then it would make sense to think that political antagonism is an ill to be avoided. Political antagonism would then be exceptional, whereas its suspension for the sake of decisive, authoritative decision-making would be the norm. This is the core constitutive contradiction of the theory of deliberative democracy: the priority given to discussion and rationality tends to foreclose the possibility of contestation and disagreement.

    If, however, politics is a struggle for power in the pursuit of collective interests, it becomes harder to insist that the task of politics is to smooth over differences, rather than to articulate them and act upon them. Both Rawls and Habermas have been the subjects of extensive critique by proponents of several different perspectives in political theory. Communitarian critics have typically charged Rawls with relying on a too-atomized conception of individual subjects, whose preferences and beliefs are unformed by social, cultural or institutional contexts (Gutmann 1985); similar criticisms have been mounted against Habermas (see, for example, C. Taylor 1989). Both thinkers’ accounts of the foundations of political order fail to acknowledge the politically constitutive aspects of gender and sexuality (Okin 1989, Meehan 1995). From the perspective of a more radical conception of democracy, even Rawls’s later writings in which he claims to offer a constructivist (rather than metaphysical) account of political morality (Rawls 1993) do not necessarily pass muster, particularly given that his theory is fundamentally a brief for liberalism and not for the democratization of society (for elaboration of this claim see Wolin 1996).

    Deliberative democracy, considered as a prescriptive model of politics, represents a striking departure both from political thought on the right—typically preoccupied with maintaining cultural logics and preserving existing social hierarchies—and from political thought on the left, which often emphasizes contingency, conflict, and the priority of collective action. Both of these latter approaches to politics take social phenomena as subjects of concern in and of themselves, and not merely as intermediate formations which reduce to individual subjectivity. The substitution of the ethical for the political marks an intellectual project that is adequate to the imperatives of a capitalist political economy. The contradictory merger of the ethical anxieties underpinning deliberative democratic theory and liberal democracy’s notional commitment to legitimation through popular sovereignty tends toward quietism and immobilism.

    FLOSS and Democracy

    The free and open source software movements are cases of distinct importance in the emergence of digital democracy. Their traditions, and many of the actors who participate in them, antedate the digital turn considerably: the free software movement began in earnest in the mid-1980s, while its social and technical roots may be traced further back and are tangled with countercultural trends in computing in the 1970s. The movements display durable commitments to ethical democracy in their rhetoric, their organizational strategies, and the philosophical presuppositions that are revealed in their aims and activities (Coleman 2012).

    FLOSS is sited at the intersection of many of liberal democratic theory’s desiderata. These are property, persuasion, rights, and ethics. The movement is a flawed, incompletely successful, but suggestive and instructive attempt at reconfiguring capitalist property relations—importantly, and fatally, from inside of an existing set of capitalist property relations—for the sake of realizing liberal ethical commitments with respect to expression, communication, and above all personal autonomy. Self-conscious hackers in the world of FLOSS conceive of their shared goals as the maximization of individual freedom with respect to the use of computers. Coleman describes how many hackers conceive of this activity in explicitly ethical terms. For them, hacking is a vital expression of individual freedom—simultaneously an aesthetic posture as well as a furtherance of specific ethical projects (such as the dissemination of information, or the empowerment of the alienated subject).

    The origins of the free software movement are found in the countercultural currents of computing in the 1970s, when several lines of inquiry and speculation converged: cybernetics, decentralization, critiques of bureaucratic organization, and burgeoning individualist libertarianism. Early hacker values—such as unfettered sharing and collaboration, a suspicion of distant authority given expression through decentralization and redundancy, and the maximization of the latitude of individual coders and users to alter and deploy software as they see fit—might be seen as the outflowing of several political traditions, notably participatory democracy and mutualist forms of anarchism. Certainly, the computing counterculture born in the 1970s was self-consciously opposed to what it saw as the bureaucratized, sclerotic, and conformist culture of major computing firms and research laboratories (Barbrook and Cameron 2002). Richard Stallman’s 1985 declaration of the need for, and the principles underlying, the free development of software is often treated as the locus classicus of the movement (Stallman, The GNU Manifesto 2015). Stallman succeeded in instigating a narrow kind of movement, one whose social specificity it is possible to trace. Its social basis consisted of communities of software developers, analysts, administrators, and hobbyists—in a word, hackers—that shared Stallman’s concerns over the subsumption of software development under the value-expanding imperatives of capital. As they saw it, the values of hacking were threatened by a proprietarian software development model predicated on the enclosure of the intellectual commons.

    Democracy, as it is championed by FLOSS advocates, is not necessarily an ideal of well-ordered constitutional forms and institutions whose procedures are grounded in norms of reciprocity and intersubjective rationality. It is characterized by a tension between an enthusiasm for volatile forms of participatory democracy and a tendency toward deference to the competence or charisma (the two are frequently conflated) of leaders. Nevertheless, the parallels between the two political projects—deliberative democracy and hacker liberation under the banner of FLOSS—are striking. Both projects share an emphasis on the persuasion of individuals, such that intersubjective rationality is the test of the permissibility of power arrangements or use restrictions. As such, both projects—insofar as they are to be considered to be interventions in politics—are necessarily self-limiting.

    Exponents of digital democracy rely on a conception of democracy that is strikingly similar to the theory of ethical democracy considered above. The constitutive documents and inscriptive commitments of various FLOSS communities bear witness to this. FLOSS communities should attract our interest because they are frequently animated by ethical and political concerns which appear to be liberal—even left-liberal—rather than libertarian. Barbrook and Cameron’s “Californian ideology” is frequently manifested in libertarian rhetorics that tend to have a right-wing grounding. The rise of Bitcoin is also a particularly resonant recent example (Golumbia 2016). The adulation that accompanies the accumulation of wealth in Silicon Valley furnishes a more abstract example of the ideological celebration of acquisitive amour propre in computing’s social relations. The ideological substrate of commercial computing is palpably right-wing, at least in its orientation to political economy. As such it is all the more noteworthy that the ideological commitments of many FLOSS projects appear to be animated by ethico-political concerns that are more typical of left-liberalism, such as consensus-seeking modes of collective decision-making; recognition of the struggles and claims of members of marginalized or oppressed groups; and the affirmation of differing identities.

    Free software rhetoric relies on concepts like liberty and freedom (Free Software Foundation 2018). It is in this rhetoric that free software’s imbrication within capitalist property relations is most apparent:

    Freedom means having control over your own life. If you use a program to carry out activities in your life, your freedom depends on your having control over the program. You deserve to have control over the programs you use, and all the more so when you use them for something important in your life. (Stallman 2015)

    Stallman’s equation of freedom with control—self-control—is telling: Copyleft does not subvert copyright; it depends upon it. Hacking is dependent upon the corporate structure of industrial software development. It is embedded in the social matrix of closed-source software production, even though hackers tend to believe that “their expertise will keep them on the upside of the technology curve that protects the best and brightest from proletarianization” (Ross 2009, 168). A dual contradiction is at work here. First, copyleft inverts copyright in order to produce social conditions in which free software production may occur. Second, copyleft nevertheless remains dependent on closed-source software development for its own social reproduction. Without the state power that is necessary for contracts to be enforced, or without the reproduction of technical knowledge that is underwritten by capital’s continued interest in software development, FLOSS loses its social base. Artisanal hacking or digital homesteading could not enter into the void were capitalist computing to suddenly disappear. The decentralized production of software is largely epiphenomenal upon the centralized and highly cooperative models of development and deployment that typify commercial software development. The openness of development stands in uneasy contrast with the hierarchical organization of the management and direction of software firms (Russell 2014).

    Capital has accommodated free and open source software with little difficulty, as can be seen in the expansion of the open source software movement. As noted above, many advocates of both the free software and open source software movements frequently aver that their commitments overlap to the point that any differences are largely ones of emphasis. Nevertheless, open source software differs—in an ideal, if not political, sense—from free software in its distinct orientation to the value of freedom: freedom is valued as the absence of the fetters on coding, design, and debugging that characterize proprietary software development. As such, open source software trades on an interpretation of freedom that is rather distinct from the ethical individualism of free software. Indeed, it is more recognizably politically adjacent to right-wing libertarianism. This may be seen, for example, in the writings of Eric S. Raymond. His influential essay “The Cathedral and the Bazaar” is a paean not to the emancipatory potential of open source software but to its adaptability and suitability for large-scale, rapid-turnover software development—and to its amenability to the prerogatives of capital (Raymond 2000).

    One of the key ethical arguments made by free and open source software advocates rests on an understanding of property that is historically specific. The conception of property deployed within FLOSS is the absolute and total right of owners to dispose of their possessions—a form of property rights that is peculiar to the juridical apparatus of capitalism. There are, of course, superficial resemblances between software license agreements—which curtail the rights of those who buy hardware with pre-installed commercial software, for example—and the seigneurial prerogatives associated with feudalism. However, the specific set of property relations underpinning capitalist software development is also the same set of property relations that is traded upon in FLOSS theory. FLOSS criticism of proprietary software rarely extends to a criticism of private property as such. Ethical arguments for the expansion of personal computing freedoms, made with respect to the prevailing set of property relations, frequently focus on consumption. The focus may be positive: the freedom of the individual finds expression in the autonomy of the rational consumer of commodities. Or the focus may be negative: individual users must eschew a consumerist approach to computing or they will be left at the mercy of corporate owners of proprietary software.

    Arguments erected on premises about individual consumption choices are not easily extended to the sphere of collective political action. They do not discharge calls for pressuring political institutions or pursuing public power. The Free Software Foundation, the main organizational node of the free software movement, addresses itself to individual users (and individual capitalist firms) and places its faith in the ersatz property relations made possible by copyleft’s parasitism on copyright. The FSF’s ostensible non-alignment is really complementary to, rather than antagonistic toward, the alignments of major open source organizations. Organizations associated with the open source software movement are eager to find institutional partners in the business world. It is certainly the case that in the world of commercial computing, the open source approach has been embraced as an effective means for socializing the costs of software production (and the reproduction of software development capacities) while privatizing the monetary rewards that can be realized on the basis of commodified software. Meanwhile, the writings of Stallman and the promotional literature of the Free Software Foundation eschew the kind of broad-based political strategy that their analysis would seem to militate in favor of, one in which FLOSS movements would join up with other social movements. An immobilist tendency toward a single-issue approach to politics is characteristic of FLOSS at large.

    One aspect of deliberative democracy—an aspect that is, as we have seen, treated as banal and unproblematic by many theorists of liberalism—that is often given greater emphasis by active proponents of digital democracy is the primacy of liberal property relations. Property relations take on special urgency in the discourse and praxis of free and open source software movements. Particularly in the propaganda and apologia of the open source movement, the personal computer is the ultimate form of personal property. More than that—it is an extension of the self. Computers are intimately enmeshed in human lives, to a degree even greater than was the case thirty years ago. To many hackers, the possibility that the code executed on their machines is beyond their inspection is a violation of their individual autonomy. Tellingly, analogies for this putative loss of freedom take as their postulates the “normal,” extant ways in which owners relate to the commodities they have purchased. (For example, running proprietary code on a computer may be analogized to driving a car whose hood cannot be opened.)

    Consider the Debian Social Contract, adopted in the wake of a series of controversies and debates about gender imbalance (O’Neil 2009, 129–146), which encodes a variety of liberal principles as the constitutive political materials of the Debian project. That the project’s constitutive document is self-reflexively liberal is signaled in its very title: it presupposes liberal concerns with the maximization of personal freedom and the minimization of coercion, all under the rubric of cooperation for a shared goal. The Debian Social Contract was the product of internal struggles within the Debian project, which aims to produce a technically sophisticated and yet ethically grounded version of the GNU/Linux operating system. It represents the ascendancy of a tendency within the Debian project that sought to affirm the project’s emancipatory aims. This is not to suggest that, prior to the adoption of the Social Contract, the project was characterized by an uncontested focus on technical expertise, at the direct expense of an emancipatory vision of FLOSS computing; nevertheless, the experience decisively shifted Debian’s trajectory such that it was no longer parallel with that of related projects.

    Another example of FLOSS’s fetishism for non-coercive, individual-centered ethics may be found in the emphasis placed on maximizing individual user freedom. The FSF, for example, considers it a violation of user autonomy to make the use of free, open source software conditional by restricting its use—even only notionally—to legal or morally-sanctioned use cases. As is often the case when individualist libertarianism comes into contact with practical politics, an obstinate insistence on abstract principles discharges absurd commitments. The major stakeholders and organizational nodes in the free software movement—the FSF, the GNU development community, and so on—refuse even to censure the use of free software in situations characterized by the restriction or violation of personal freedoms: military computing, governmental surveillance, and so on.

    It must also be noted that the hacker ethos is at least partially coterminous with cyberlibertarianism. Found in both is the tendency to see the digital sphere as both the vindication of neoliberal economic precepts and the ideal terrain in which to pursue right-wing social projects. From the user’s perspective, cyberlibertarianism is presented as a license to use and appropriate the work of others who have made their works available for such purposes. It may perhaps be said that cyberlibertarianism is the ethos of the alienated monad pursuing jouissance through the acquisition of technical mastery and control over a personal object, the computer.

    Persuasion and Contestation

    We are now in a position to examine the contradictions in the theory of politics that informs FLOSS activity. These contradictions converge at two distinct—though certainly related—sites. The first site centers on power and interest aggregation; the second, on property and the claims of users over their machines and data. An elaboration and examination of these contradictions will suggest that, far from overcoming or transcending the contradictions of liberalism as they inhere either in contemporary political practice or in liberal political thought, FLOSS hackers and activists have reproduced them in their practices as well as in their texts.

    The first site of contradiction centers on politics. FLOSS advocates adhere to an understanding of politics that emphasizes moral suasion and that valorizes the autonomy of the individual to pursue chosen projects and satisfy their own preferences. This despite the fact that the primary antagonists in the FLOSS political imaginary—corporate owners of IP portfolios, developers and retailers of proprietary software, and policy-makers and bureaucrats—possess considerable political, legal, and social power. FLOSS discourses counterpose to this power, not counterpower but evasion, escape, and exit. Copyleft itself may be characterized as evasive, but more central here is the insistence that FLOSS is an ethical rather than a political project, in which individual developers and users must not be corralled into particular formations that might use their collective strength to demand concessions or transform digitally mediated social relations. This disavowal of politics directly inhibits the articulation of counter-positions and the pursuit of counterpower.

    So long as FLOSS as a political orientation remains grounded in a strategic posture of libertarian individualism and interpersonal moral suasion, it will be unable to effectively underwrite demands or place significant pressures on institutions and decision-making bodies. FLOSS political rhetoric trades heavily on tropes of individual sovereignty, egalitarian epistemologies, and participatory modes of decision-making. Such rhetorics align comfortably with the currently prevailing consensus regarding the aims and methods of democratic politics, but when relied on naïvely or uncritically, they place severe limits on the capacity for the FLOSS movement to expand its political horizons, or indeed to assert itself in such a way as to become a force to be reckoned with.

    The second site of contradiction is centered on property relations. In the self-reflexive and carefully articulated discourse of FLOSS advocates, persons are treated as ethical agents, but such agents are primarily concerned with questions of the disposition of their property—most importantly, their personal computing devices. Free software advocates, in particular, emphasize the importance of users’ freedoms, but their attentiveness to such freedoms appears to end at the interface between owner and machine. More generally, property relations are foregrounded in FLOSS discourse even as such discourse draws upon and deploys copyleft in order to weaponize intellectual property law against proprietarian use cases.

    So long as FLOSS as a social practice remains centered on copyleft, it will reproduce and reinforce the property relations which sustain a scarcity economy of intellectual creations. Copyleft is commonly understood as an ingenious solution to what is seen as an inherent tendency in the world of software towards restrictions on access, limitations on communication and exchange of information, and the diminution of the informational commons. However, these tendencies are more appropriately conceived of as notably enduring features of the political economy of capitalism itself. Copyleft cannot dismantle a juridical framework heavily weighted in favor of ownership in intellectual property from the inside—no more so than a worker-controlled-and-operated enterprise threatens the circuits of commodity production and exchange that comprise capitalism as a set of social relations. Moreover, major FLOSS advocates—including the FSF and the Open Source Initiative—proudly note the reliance of capitalist firms on open source software in their FAQs, press releases, and media materials. Such a posture—welcoming the embrace of FLOSS by the software industry, with its attendant practices of labor discipline and domination, customer and citizen surveillance, and privatization of data—stands in contradiction with putative FLOSS values like collaborative production, code transparency, and user freedom.

    The persistence—even, in some respects, the flourishing—of FLOSS in the current moment represents a considerable achievement. Capitalism’s tendency toward crisis continues to impel social relations toward the subsumption of more and more of the social under the rubric of commodity production and exchange. And yet it is still the case that access to computing processes, logics, and resources remains substantially unrestricted by legal or commercial barriers. Much of this must be credited to the efforts of FLOSS activists. The first cohort of FLOSS activists recognized that resisting the commodification of the information commons was a social struggle—not simply a technical challenge—and sought to combat it. That they did so according to the logic of single-issue interest group activism, rather than in solidarity with a broader struggle against commodification, should perhaps not be surprising; in the final quarter of the twentieth century, broad struggles for power and recognition by and on behalf of workers and the poor were at their lowest ebb in a century, and a reconfiguration of elite power in the state and capitalism was well under way. Cross-class, multiracial, and gender-inclusive social movements were losing traction in the face of retrenchment by a newly emboldened ruling class; and the conceptual space occupied by such work was contested. Articulating their interests and claims as participants in liberal interest group politics was by no means the poorest available strategic choice for FLOSS proponents.

    The contradictions of such an approach have nevertheless developed apace, such that the current limitations and impasses faced by FLOSS movements appear more or less intractable. Free and open source software is integral to the operations of some of the largest firms in economic history. Facebook (2018), Apple (2018), and Google (Alphabet, Inc. 2018), for example, all proudly declare their support of and involvement in open source development.[4] Millions of coders, hackers, and users can and do participate in widely (if unevenly) distributed networks of software development, debugging, and deployment. It is now a practical possibility for the home user to run and maintain a computer without proprietary software installed on it. Nevertheless, proprietary software development remains a staggeringly profitable undertaking, FLOSS hacking remains socially and technically dependent on closed computing, and the home computing market is utterly dominated by the production and sale of machines that ship with and run software that is opaque—by design and by law—to the user’s inspection and modification. These limitations are compounded by FLOSS movements’ contradictions with respect to property relations and political strategy.

    Implications and Further Questions

    The paradoxes and contradictions that attend both the practice and theory of digital democracy in the FLOSS movements bear strong family resemblances to the paradoxes and contradictions that inhere in much contemporary liberal political theory. Liberal democratic theory frequently seeks to meld a commitment to rational legitimation with an affirmation of the ideal of popular sovereignty; but an insistence on rational authority tends to undermine the insurgent potential of democratic mass action. Similarly, the public avowals of respect for human rights and the value of user freedom that characterize FLOSS rhetoric are in tension with a simultaneous insistence on moral suasion centered on individual subjectivity. What’s more, they are flatly contradicted by prominent FLOSS leaders’ and stakeholders’ stated commitments to capitalist labor relations and to neutrality with respect to the social or moral consequences of the use of FLOSS. Liberal political theory is potentially self-negating to the extent that it discards the political in favor of the ethical. Similarly, FLOSS movements short-circuit much of FLOSS’s potential social value through a studied refusal to consider the merits of collective action or the necessity of social critique.

    The disjunctures between the rhetorics and stated goals of FLOSS movements and their actual practices and existing social configurations are deserving of greater attention from a variety of perspectives. I have approached those disjunctures through the lens of political theory, but these phenomena are also deserving of attention within other disciplines. The contradiction between FLOSS’s discursive fealty to the emancipatory potential of software and the dependence of FLOSS upon the property relations of capitalism merits further elaboration and exploration. The digital turn is too easily conflated with the democratization of a social world that is increasingly intermediated by networked computing. The prospects for such an opening up of digital public life remain dim.

    _____

    Rob Hunter is an independent scholar who holds a PhD in Politics from Princeton University.


    _____

    Acknowledgments

    [*] I am grateful to the b2o: An Online Journal editorial collective and to two anonymous reviewers for their feedback, suggestions, and criticism. Any and all errors in this article are mine alone. Correspondence should be directed to: jrh@rhunter.org.

    _____

    Notes

    [1] The notion of the digitally-empowered “sovereign individual” is adumbrated at length in an eponymous book by Davidson and Rees-Mogg (1999) that sets forth a right-wing techno-utopian vision of network-mediated politics—a reactionary pendant to liberal optimism about the digital turn. I am grateful to David Golumbia for this reference.

    [2] For simultaneous presentations and critiques of these arguments see, for example, Dahlberg and Siapera (2007), Margolis and Moreno-Riaño (2013), Morozov (2013), Taylor (2014), and Tufekci (2017).

    [3] See Bernes (2013) for a thorough presentation of the role of logistics in (re)producing social relations in the present moment.

    [4] “Google believes that open source is good for everyone. By being open and freely available, it enables and encourages collaboration and the development of technology, solving real world problems” (Alphabet, Inc. 2017).

    _____

    Works Cited

    • Alphabet, Inc. 2018. “Google Open Source.” (Accessed July 31, 2018.)
    • Amrute, Sareeta. 2016. Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham, NC: Duke University Press.
    • Apple Inc. 2018. “Open Source.” (Accessed July 31, 2018.)
    • Barbrook, Richard, and Andy Cameron. (1995) 2002. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates, and Pirate Utopias. Cambridge, MA: The MIT Press. 363–387.
    • Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
    • Bernes, Jasper. 2013. “Logistics, Counterlogistics and the Communist Prospect.” Endnotes 3. 170–201.
    • Bonacich, Edna, and Jake Wilson. 2008. Getting the Goods: Ports, Labor, and the Logistics Revolution. Ithaca, NY: Cornell University Press.
    • Borsook, Paulina. 2000. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. New York: PublicAffairs.
    • Brown, Wendy. 2005. “Neoliberalism and the End of Liberal Democracy.” In Edgework: Critical Essays on Knowledge and Politics. Princeton, NJ: Princeton University Press. 37–59.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Castells, Manuel. 2010. The Rise of The Network Society. Malden, MA: Wiley-Blackwell.
    • Coleman, E. Gabriella. 2012. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ: Princeton University Press.
    • Dahlberg, Lincoln. 2010. “Cyber-Libertarianism 2.0: A Discourse Theory/Critical Political Economy Examination.” Cultural Politics 6:3. 331–356.
    • Dahlberg, Lincoln, and Eugenia Siapera. 2007. “Tracing Radical Democracy and the Internet.” In Lincoln Dahlberg and Eugenia Siapera, eds., Radical Democracy and the Internet: Interrogating Theory and Practice. Basingstoke: Palgrave. 1–16.
    • Davidson, James Dale, and William Rees-Mogg. 1999. The Sovereign Individual: Mastering the Transition to the Information Age. New York: Touchstone.
    • Drahos, Peter. 2002. Information Feudalism: Who Owns the Knowledge Economy? New York: The New Press.
    • Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
    • Facebook, Inc. 2018. “Facebook Open Source.” (Accessed July 31, 2018.)
    • Free Software Foundation. 2018. “What Is Free Software?” (Accessed July 31, 2018.)
    • Fuchs, Christian. 2014. Digital Labour and Karl Marx. London: Routledge.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge, MA: Harvard University Press.
    • Golumbia, David. 2013. “Cyberlibertarianism: The Extremist Foundations of ‘Digital Freedom.’” Uncomputing.
    • Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis, MN: University of Minnesota Press.
    • Gutmann, Amy. 1985. “Communitarian Critics of Liberalism.” Philosophy and Public Affairs 14. 308–322.
    • Habermas, Jürgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, MA: MIT Press.
    • Habermas, Jürgen. 1998. The Inclusion of the Other. Edited by Ciaran P. Cronin and Pablo De Greiff. Cambridge, MA: MIT Press.
    • Hindman, Matthew. 2008. The Myth of Digital Democracy. Princeton, NJ: Princeton University Press.
    • Kelty, Christopher M. 2008. Two Bits: The Cultural Significance of Free Software. Durham, NC: Duke University Press.
    • Klein, Hans. 1999. “Tocqueville in Cyberspace: Using the Internet for Citizens Associations.” Technology and Society 15. 213–220.
    • Laclau, Ernesto, and Chantal Mouffe. 2014. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. London: Verso.
    • Margolis, Michael, and Gerson Moreno-Riaño. 2013. The Prospect of Internet Democracy. Farnham: Ashgate.
    • Meehan, Johanna, ed. 1995. Feminists Read Habermas. New York: Routledge.
    • Mills, Charles W. 1997. The Racial Contract. Ithaca, NY: Cornell University Press.
    • Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs.
    • Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
    • Mouffe, Chantal. 2005. The Democratic Paradox. London: Verso.
    • Mouffe, Chantal. 2013. Agonistics: Thinking the World Politically. London: Verso.
    • Okin, Susan Moller. 1989. “Justice as Fairness, For Whom?” In Justice, Gender and the Family. New York: Basic Books. 89–109.
    • O’Neil, Mathieu. 2009. Cyberchiefs: Autonomy and Authority in Online Tribes. London: Pluto Press.
    • Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
    • Pedersen, J. Martin. 2010. “Introduction: Property, Commoning and the Politics of Free Software.” The Commoner 14 (Winter). 8–48.
    • Pettit, Philip. 2004. “Depoliticizing Democracy.” Ratio Juris 17:1. 52–65.
    • Quong, Jonathan. 2013. “On the Idea of Public Reason.” In The Blackwell Companion to Rawls, edited by John Mandle and David A. Reidy. Oxford: Wiley-Blackwell. 265–280.
    • Rawls, John. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
    • Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
    • Rawls, John. 2001. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
    • Rawls, John. 2007. Lectures in the History of Political Philosophy. Cambridge, MA: The Belknap Press of Harvard University Press.
    • Raymond, Eric S. 2000. The Cathedral and the Bazaar. Self-published.
    • Reagle, Joseph. 1999. “Why the Internet Is Good: Community Governance That Works Well.” Berkman Center.
    • Ross, Andrew. 2009. Nice Work If You Can Get It: Life and Labor in Precarious Times. New York: New York University Press.
    • Russell, Andrew L. 2014. Open Standards and the Digital Age. New York, NY: Cambridge University Press.
    • Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. London: Penguin.
    • Stallman, Richard M. 2002. Free Software, Free Society: Selected Essays of Richard M. Stallman. Edited by Joshua Gay. Boston: GNU Press.
    • Stallman, Richard M. 2015. “Free Software Is Even More Important Now.” GNU.org.
    • Stallman, Richard M. 2015. “The GNU Manifesto.” GNU.org.
    • Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in the Digital Age. New York: Metropolitan Books.
    • Taylor, Charles. 1989. Sources of the Self. Cambridge, MA: Harvard University Press.
    • Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT: Yale University Press.
    • Weber, Steven. 2005. The Success of Open Source. Cambridge, MA: Harvard University Press.
    • Wolin, Sheldon. 1996. “The Liberal/Democratic Divide: On Rawls’s Political Liberalism.” Political Theory 24. 97–119.

     

  • Tim Duffy — Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn

    Tim Duffy

    Christian Jacob, in The Sovereign Map, describes maps as enablers of fantasy: “Maps and globes allow us to live a voyage reduced to the gaze, stripped of the ups and downs and chance occurrences, a voyage without the narrative, without pitfalls, without even the departure” (2005). Consumers and theorists of maps, more than cartographers themselves, are especially set up to enjoy the “voyage reduced to the gaze” that cartographic artifacts (including texts) are able to provide. An outside view, distant from the production of the artifact, activates the epistemological potential of the artifact in a way that producing the same artifact cannot.

    This dynamic is found at the conceptual level of interpreting cartography as a discipline as well. J.B. Harley, in his famous essay “Deconstructing the Map,” writes that:

    a major roadblock to our understanding is that we still accept uncritically the broad consensus, with relatively few dissenting voices, of what cartographers tell us maps are supposed to be. In particular, we often tend to work from the premise that mappers engage in an unquestionably “scientific” or “objective” form of knowledge creation…It is better for us to begin from the premise that cartography is seldom what cartographers say it is (Harley 1989, 57).

    Harley urges an interpretation of maps outside the purview and authority of the map’s creator, just as a literary scholar would insist on the critic’s ability to understand the text beyond the authority of what the authors say about their texts. There can be, in other words, a power in having distance from the act of making. There is clarity that comes from the role of the thinker outside of the process of creation.

    The goal of this essay is to push back against the valorization of “tools” and “making” in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of cartographers’ work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.

    Cartography in the sixteenth century, even as its tools and representational techniques were becoming more and more sophisticated, could never quite abandon the religious legacies of its past, nor did it want to. Roger Bacon in the thirteenth century had claimed that only with a thorough understanding of geography could one understand the Bible. Pauline Moffitt Watts, in her essay “The European Religious Worldview and Its Influence on Mapping” concludes that many maps, including those by Eskrich and Ortelius, preserved a sense of providential and divine meaning even as they sought to narrate smaller, local areas:

    Although the messages these maps present are inescapably bound, their ultimate source—God—transcends and eclipses history. His eternity and omnipresence is signed but not constrained in the figurae, places, people, and events that ornament them. They offer fantastic, sometimes absurd vignettes and pastiches that nonetheless integrate the ephemera into a vision of providential history that maintained its power to make meaning well into the early modern era. (2007, 400)

    The way maps make meaning is contained not just in the technical expertise of the way the maps are constructed but in the visual experiences they provide that “make meaning” for the viewer. By overemphasizing the way maps are made or the geometric innovations that make their creation possible, the cartographic historian and theorist would miss the full effect of the work.

    Yet, the spiritual dimensions of mapmaking were not in opposition to technological expertise, and in many cases they went hand in hand. In his book Radical Arts, the Anglo-Dutch scholar Jan van Dorsten describes the spiritual motivations of sixteenth-century cosmographers disappointed by academic theology’s failure to ease the trauma of the European Reformation: “Theology…as the traditional science of revelation had failed visibly to unite mankind in one indisputably “true” perception of God’s plan and the properties of His creature. The new science of cosmography, its students seem to argue, will eventually achieve precisely that, thanks to its non-disputative method” (1970, 56-7). Some mapmakers of the sixteenth century in England, the Netherlands, and elsewhere—Ortelius among them—imagined that the science and art of describing the created world, a text rivaling scripture in both revelatory potential and divine authorship, would create unity out of the disputation-prone culture of academic theology. Unlike theology, where thinkers are mapping an invisible world held in biblical scripture and apostolic tradition (as well as a millennium’s worth of commentary and exegesis), the liber naturae, the book of nature, is available to the eyes more directly, seemingly less prone to disputation.

    Cartographers were attempting to create an accurate imago mundi—surely that was a more tangible and grounded goal than trying to map divinity. Yet, as Patrick Gautier Dalché notes in his essay “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” the modernizing techniques of cartography after the “rediscovery” of Ptolemy’s work did not exactly follow a straight line of empirical progress:

    The modernization of the imago mundi and the work on modes of representation that developed during the early years of the sixteenth century should not be seen as either more or less successful attempts to integrate new information into existing geographic pictures. Nor should they be seen as steps toward a more “correct” representation, that is, toward conforming to our own notion of correct representation. They were exploratory games played with reality that took people in different directions…Ptolemy was not so much the source of a correct cartography as a stimulus to detailed consideration of an essential fact of cartographic representation: a map is a depiction based on a problematic, arbitrary, and malleable convention. (2007, 360)

    So even as the maps of this period may appear more “correct” to us, they are still engaged in experimentation to a degree that undermines any sense of the map as simply an empirical graphic representation of the earth. The “problematic, arbitrary, and malleable” conventions, used by the cartographer but observed and understood by the cartographic theorist and historian, reveal the sort of synergetic relationship between maker and observer, practitioner and theorist, that allows an artifact to come into greater focus.

    Yet, cartography for much of its history turned away from seeing its work as culturally or even politically embedded. David Stoddart, in his history of geography, labels Cook’s voyage to the Pacific in 1769 as the origin point of cartography’s transformation into an empirical science.[1] Stoddart places geography, from that point onward, within the realm of the natural sciences based on, as Derek Gregory observes, “three features of decisive significance for the formation of geography as a distinctly modern, avowedly ‘objective’ science: a concern for realism in description, for systematic classification in collection, and for the comparative method in explanation” (Gregory 1994, 19). What is gone, then, in this march toward empiricism is any sense of culturally embedded codes within the map. The map, like a lab report of scientific findings, is meant to represent what is “actually” there. This term “actually” will come back to haunt us when we turn to the digital humanities.

    Yet, in the long history of mapping, before and after this supposed empirical fulcrum, maps remain slippery and malleable objects that are used for a diverse range of purposes and that reflect the cultural imagination of their makers and observers. As maps took on the appearance of the empirical and began to sublimate the devotional and fantastical aspects they had once shown proudly, they were no less imprinted with cultural knowledge and biases. If anything, the veil of empiricism allowed the cultural, political, and imperial goals of mapmaking to be hidden.

    In his groundbreaking essay “Inventing America: The Culture of the Map,” William Boelhower argued precisely that maps had not simply graphically represented America, but rather that America was invented by maps. “Accustomed to the success of scientific discourse and imbued with the Cartesian tradition,” he writes, “the sons of Columbus naturally presumed that their version of reality was the version” (1988, 212). While Europeans believed they were simply mapping what they saw according to empirical principles, they didn’t realize they were actually inventing America in their own discursive image. He elaborates: “The Map is America’s precognition; at its center is not geography in se but the eye of the cartographer. The fact requires new respect for the in-forming relation between the history of modern cartography and the history of the Euro-American’s being-in-the-new-world” (213). Empiricism, then, was empire. “Empirical” maps were making the eye of the cartographer into the ideal “objective” viewer, producing a fictional way of seeing that reflected state power. Boelhower refers to the scale map as a kind of “panopticon” because of the “line’s achievement of an absolute and closed system no longer dependent on the local perspectivism of the image. With map in hand, the physical subject is theoretically everywhere and nowhere, truly a global operator” (222). What appears, then, simply to be the gathering, studying, and representation of data is, in fact, a system of discursive domination in which the cartographer asserts their worldview onto a site. As Boelhower puts it: “Never before had a nation-state sprung so rationally from a cartographic fiction, the Euclidean map imposing concrete form on a territory and a people” (223). America was a cartographic invention meant to appear as empirically identical to how the cartographers made it look.

    To turn again to J.B. Harley’s 1989 bombshell, maps are always evidence of cultural norms and perspectives, even when they try their best to appear sparse and scientific. Referring to “plain scientific maps,” Harley claims that “such maps contain a dimension of ‘symbolic realism’ which is no less a statement of political authority or control than a coat-of-arms or a portrait of a queen placed at the head of an earlier decorative map.” Even “accuracy and austerity of design are now the new talismans of authority culminating in our own age with computer mapping” (60). To represent the world “is to appropriate it” and to “discipline” and “normalize” it (61). The more we move away from cultural markers for the mythical comfort of “empirical” data, the more we find we are creating dominating fictions. There is no representation of data that does not exist within the hierarchies of cultural codes and expectations.

    What this rather eclectic history of cartography reveals is that even when maps and mapmaking attempt to hide or move beyond their cultural and devotional roots, cultural, ethical, and political markers inevitably embed themselves in the map’s role as a broker of power. Maps sort data, but in so doing they create worldviews with real-world consequences. While some practitioners of mapmaking in the early modern period, such as those Familists who counted several cartographers among their membership, might have thought their cartographic work provided a more universal and less disputation-prone discursive focus than, say, philosophy or theology, they were in fact producing power through their maps, appropriating and taming the world around them in ways only fully accessible to the reader, the historian, the viewer. Harley invites us to push back against a definition of cartographic studies that follows what cartographers themselves believe cartography must be. One can now, like the author of this essay, be a theorist and historian of cartographic culture without ever having made a map. Having one’s work exist outside the power-formation networks of cartographic technology provides a unique view into how maps make meaning and power out in the world. The main goal of this essay, as I turn to the digital humanities, is to encourage those interested in the digital turn to make room for those who study, observe, and critique, but do not make.[2]

    Though the digital turn in the humanities is often celebrated for its wider scope and its ability to allow scholars to interpret—or at least observe—data trends across many more books than one human could read in the research period of an academic project, I would argue that the main fantasy of the digital turn can be understood through its preoccupation with access and a view of its labor as fundamentally different from the labor of traditional academic discourse. A radical hybridity is celebrated. Rather than just read books and argue about the contents, the digital humanist is able to draw from a wide variety of sources and expanded data. Michael Witmore, in a recent essay published in New Literary History, celebrates this age of hybridity: “If we speak of hybridization as the point where constraints cease to be either intellectual or physical, where changes in the earth’s mean temperature follow just as inevitably from the ‘political choices of human beings’ as they do from the ‘laws of nature,’ we get a sense of how rich and productive the modernist divide has been. Hybrids have proliferated. Indeed, they seem inexhaustible” (355). Witmore sees digital humanities as existing within this hybridity: “The Latourian theory of hybrids provides a useful starting point for thinking about a field of inquiry in which interpretive claims are supported by evidence obtained via the exhaustive, enumerative resources of computing” (355). The emphasis on the “exhaustive” and “enumerative” resources of computing would imply, even if this were not Witmore’s intention, that computing opens a depth of evidence not available to the non-hybrid, non-digitally enabled humanist.

    Indeed, in certain corners of DH, one often finds a suspicious eye cast on the value of traditional exegetical practices undertaken without any digital engagement. In The Digital Humanist: A Critical Inquiry by Teresa Numerico, Domenico Fiormonte, and Francesca Tomasi, “the authors call on humanists to acquire the skills to become digital humanists,” elaborating: “Humanists must complete a paso doble, a double step: to rediscover the roots of their own discipline and to consider the changes necessary for its renewal. The start of this process is the realization that humanists have indeed played a role in the history of informatics” (2015, x). Numerico, Fiormonte, and Tomasi offer a vision of the humanities as in need of “renewal” rather than under attack from external forces. The suggestion is that the humanities need to rediscover their roots while at the same time taking on the “tools necessary for [their] renewal,” tools which are related to their “role in the history of informatics” and computing. The humanities are then shown to be tied up in a double bind: they have forgotten their roots, and they are unable to innovate without considering the digital.

    To offer a political aside: while Numerico, Fiormonte, and Tomasi offer a compelling and necessary history of the humanistic roots of computing, their argument is well in line with right-leaning attacks on the humanities. In their view, the humanities have fallen away from their first purpose, their roots. While the authors of the volume see these roots as connected to the early years of modern computer science, they could just as easily, especially given what early computational humanities looked like, be urging a return to philology and to the world of concordances and indexing that were so important to early and mid-twentieth century literary studies. They might also gesture instead at the deep history of political and philosophical thought out of which the modern university was born, and which was considered fundamental to the very project of university education until only very recently. Barring a return to these roots, the least the humanities can do to survive is to renew themselves based on a connection to the digital and to the site of modern work: the computer terminal.

    Of course, what scholarly work is done outside the computer terminal? Journals and, increasingly, whole university press catalogs are being digitized and sold to university libraries on a subscription basis. Scholars read these materials and then type their own words into word processing programs on machines (even if, like the recent Freewrite machine released by Astrohaus, the machine attempts to appear as little like a computer as possible) and then, in almost all cases, email their work to editors, who edit it digitally and publish it either in digitally enabled print publishing or directly online. So why aren’t humanists of all sorts already considered connected to the digital?

    The answer is complicated and, like so many things in DH, depends on which particular theorist or practitioner you ask. Matthew Kirschenbaum writes about how one knows one is a digital humanist:

    You are a digital humanist if you are listened to by those who are already listened to as digital humanists, and they themselves got to be digital humanists by being listened to by others. Jobs, grant funding, fellowships, publishing contracts, speaking invitations—these things do not make one a digital humanist, though they clearly have a material impact on the circumstances of the work one does to get listened to. Put more plainly, if my university hires me as a digital humanist and if I receive a federal grant (say) to do such a thing that is described as digital humanities and if I am then rewarded by my department with promotion for having done it (not least because outside evaluators whom my department is enlisting to listen to as digital humanists have attested to its value to the digital humanities), then, well, yes, I am a digital humanist. Can you be a digital humanist without doing those things? Yes, if you want to be, though you may find yourself being listened to less unless and until you do some thing that is sufficiently noteworthy that reasonable people who themselves do similar things must account for your work, your thing, as a part of the progression of a shared field of interest. (2014, 55)

    Kirschenbaum defines the digital humanist as, mostly, someone who does something that earns the recognition of other digital humanists. He argues that this is not particularly different from the traditional humanities, in which publications, grants, jobs, etc. are the standard markers of who is or is not a scholar. Yet, one wonders, especially in the age of the complete collapse of the humanities job market, if such institutional distinctions are either ethical or accurate. What would we call someone with a Ph.D. (or even without) who spends their days reading books, reading scholarly articles, and writing in their own room about the Victorian verse monologue or the early Tudor dramatic interludes? If no one reads a scholar, are they still a scholar? For the creative arts, we seem to have answered this question. We believe that the work of a poet, artist, or philosopher matters much more than their institutional appreciation or memberships during the era of the work’s production. Also, the need to be “listened to” is particularly vexed and reflects some of the political critiques that are often launched at DH. Who is most listened to in our society? White, cisgendered, heterosexual men. In the age of Trump, we are especially attuned to the fact that whom we choose to listen to is not always the most deserving or talented voice, but the one reflecting existing narratives of racial and economic distribution.

    Beyond this, the combined requirement of institutional recognition and economic investment (a salary from a university, a prestigious grant paid out) ties the work of the humanist to institutional rewards. One can be a poet, scholar, or thinker in one’s own house, but one cannot be an investment banker or a lawyer or a police officer by self-declaration. The fluid nature of who can be a philosopher, thinker, poet, scholar has always meant that the work, not the institutional affiliation, of a writer/maker matters. Though DH is a diverse body of practitioners doing all sorts of work, it is often framed, sometimes only implicitly, as a return to “work” over “theory.” Kirschenbaum, for instance, defending DH against accusations that it is against the traditional work of the humanities, writes: “Digital humanists don’t want to extinguish reading and theory and interpretation and cultural criticism. Digital humanists want to do their work… they want professional recognition and stability, whether as contingent labor, ladder faculty, graduate students, or in ‘alt-ac’ settings” (56). They essentially want the same things any other scholar does. Yet, while digital humanists are on the one hand defined by their ability to be listened to, they are on the other still in search of professional recognition and stability and eager to reshape humanistic work toward a more technological model.

    This leads to a question that is not always explored closely enough in discussions of the digital humanities in higher education. Though scholars are rightly building bridges between STEM and the humanities (pushing for STEAM over STEM), there are major institutional differences between how the humanities and the sciences have traditionally functioned. Scientific research largely happens because of institutional investment of some kind, whether from governmental, NGO, or corporate grants. This is why the funding sources of any given study are particularly important to follow. In the humanities, of course, grants also exist, and they are a marker of career prestige. No one could doubt the benefit of time spent in a far-away archive, or of time spent at home writing instead of teaching because of a dissertation-completion grant. Grants, in other words, boost careers, but they are not necessary.[3] Very successful humanists have depended on library resources alone to produce influential work. In many cases, access to a library, a computer, and a desk is all one needs, and the digitization of many archives (a phenomenon not free from political and ethical complications) has expanded access to archival materials once available only to students of wealthy institutions with deep special collections budgets or to those with grants enabling them to travel and lodge far away for their research.

    All this is to say that a particular valorization of the sciences is risky business for the humanities. Kirschenbaum recommends that since “digital humanities…is sometimes said to suffer from Physics envy,” the field should embrace this label and turn to “a singularly powerful intellectual precedent for examining in close (yes, microscopic) detail the material conditions of knowledge production in scientific settings or configurations. Let us read citation networks and publication venues. Let us examine the usage patterns around particular tools. Let us treat the recensio of data sets” (60). Longing for the humanities to resemble the sciences is nothing new. Longing for data sets instead of individual texts, longing for “particular tools” rather than a philosophical problem or trend, can sometimes be a helpful corrective to more Platonic searches for the “spirit” of a work or movement. And yet, there are risks to this approach, not least because the works themselves, that is, the objects of inquiry, are treated in such general terms that they become essentially invisible. One can miss the tree for the forest and know more about the number of citations of Dante’s Commedia than about the original text, or the spirit in which those citations are made. Surely, there is room for both, except when, because of shrinking hiring, there isn’t.

    In fact, the economic politics of the digital humanities has long been a source of at times fiery debate. Daniel Allington, Sarah Brouillette, and David Golumbia, in “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” argue that the digital humanities have long been defined by their preference for lab- and project-based sources of knowledge over traditional humanistic inquiry:

    What Digital Humanities is not about, despite its explicit claims, is the use of digital or quantitative methodologies to answer research questions in the humanities. It is, instead, about the promotion of project-based learning and lab-based research over reading and writing, the rebranding of insecure campus employment as an empowering “alt-ac” career choice, and the redefinition of technical expertise as a form (indeed, the superior form) of humanistic knowledge. (Allington, Brouillette and Golumbia 2016)

    This last point, the valorization of “technical expertise,” is, I would argue, profoundly difficult to perform in a way that doesn’t implicitly devalue the classic toolbox of humanistic inquiry. The motto “More hack, less yack”—a favorite of the THATCamps, collaborative “un-conferences”—encapsulates this idea. Too much unfettered talking could lead to discord, to ambiguity, and to strife. To hack, on the other hand, is understood as something tangible and something implicitly more worthwhile than the production of discourse outside of particular projects and digital labs. Yet Natalia Cecire has noted, “You show up at a THATCamp and suddenly folks are talking about separating content and form as if that were, like, a real thing you could do. It makes the head spin” (Cecire 2011). Context, with all its ambiguities, once the bedrock of humanistic inquiry, is being sidestepped for massive data analysis that, by the very nature of distant reading, cannot account for context to a degree that would satisfy, say, the many Renaissance scholars who trained me. Cecire’s argument is a valuable one. In her post, she does not argue that we should necessarily follow a strategy of “no hack,” only that “we should probably get over the aversion to ‘yack.’” As she notes, “[yack] doesn’t have to replace ‘hack’; the two are not antithetical.”

    As DH continues to define itself, one can detect a sense that digital humanists’ focus on individual pieces or series of data, as well as their work in coding, embeds them in more empirical conversations that do not float to the level of speculation that is so emblematic of what used to be called high theory. This is, for many DH practitioners, a source of great pride. Kirschenbaum ends his essay with the following observation: “there is one thing that digital humanities ineluctably is: digital humanities is work, somebody’s work, somewhere, some thing, always. We know how to talk about work. So let’s talk about this work, in action, this actually existing work” (61). The author’s insistence on “some thing” and “this actually existing work” implies that there is work that is not centered on a thing or that does not actually exist, and that the move toward more concrete objects of inquiry, toward more empirical subjects, is a defining characteristic of digital humanities.

    This, among other issues, has made many respond to the digital humanities as if they are cooperating with and participating in the corporatized ideologies of Silicon Valley “tech culture.” Whitney Trettien, in an insightful blogpost, claims, “Humanities scholars who engage with technology in non-trivial ways have done a poor job responding to such criticism” and accuses those who criticize digital humanities of “continuing to reify a diverse set of practices as a homogeneous whole.” Let me be clear: I am not claiming that Kirschenbaum or Trettien or any other scholar writing in a theoretical mode about digital humanities is representative of an entire field, but their writing is part of the discursive community. When those of us whose work is enabled by digital resources but who do not work to build digital tools see our work described as a “trivial” engagement with the digital, and see it put in contrast, implicitly but still clearly, with “this actually existing work,” it is hard not to feel that the humanist working on texts with digital tools (but not about the digital tools or about data derived from digital modeling) is being somehow slighted.

    For instance, in a short essay by Tom Scheinfeldt, “Why Digital Humanities is ‘Nice,’” the author claims: “One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are ‘collegiality,’ ‘openness,’ and ‘collaboration’” (2012, 1). I have to admit I have not noticed what Scheinfeldt claims people often notice (perhaps I have spent too much time on Twitter watching digital humanities debates unfurl in less than “nice” ways), but the claim, even as a discursive and defining fiction around DH, helps us understand one thread of the digital humanities’ project of self-definition: we are nice because what we work on is verifiable fact, not complicated and speculative philosophy or theory. Scheinfeldt says as much as he concludes his essay:

    Digital humanities is nice because, as I have described in earlier posts, we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically, or the practical needs of our projects require us simply to pick one and move on. Moreover, as Sean Takats, my colleague at the Roy Rosenzweig Center for History and New Media (CHNM), pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short, and digital humanities stays nice. (2)

    The most obvious question one is left with is: but what is the code doing? Where are the humanities in this vision of the digital? What truly discursive and interpretative work could produce fundamental disagreements that could be resolved simply by verifying the code in a community setting? Also, the celebration of how enforceable community norms are if an argument goes “too far afield” presents a troubling vision of discursive community, one in which the appearance of agreement, enforceable through “empirical” testing, is more important than freedom of debate. In our current political climate, one wonders if such empirically minded groupthink adequately makes room for more vulnerable, and not quite as loud, voices. When the goal is a functioning website or program, Scheinfeldt may be quite right, but when it comes to discursive work in the humanities, citing a text, for instance, rarely quells disagreement; it only makes clearer where the battle lines are drawn. This is particularly ironic given that the digital humanities, understood as a giant, discursive, never-quite-adequate term for the field, is still defining itself, and has been for decades, with essay after essay on just what DH is.

    I am echoing here some of the arguments offered by Adeline Koh in her essay “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” In this quite important intervention, Koh argues that DH is centered on two linked characteristics: niceness and technological expertise. Though one might think these requirements are disparate, Koh reveals how they are linked in the formation of a DH social contract:

    In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge—which appears to happen in [prominent DH scholar Stephen] Ramsay’s formulation of DH 2—is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility. In other words, these two rules form the basis of an attempt to enforce a digital humanities social contract: necessary conditions (technical knowledge) that impose civic responsibilities (civility and niceness). (100)

    Koh believes that what is necessary to reduce the link between DH social contracts and the tenets of liberalism is an expanded genealogy of the digital humanities. Koh urges DH to consider its roots beyond humanities computing.[4]

    To demand that one work with technical expertise on “this actually existing work”—whatever that work may end up being—is to state rather clearly that there are guidelines fencing in the digital humanities. As in the history of cartographic studies, the opinions of the makers, here those attending to tools and data sets, have been allowed to determine what the digital humanities are (or what DH is). Just as J.B. Harley challenged historians and theorists of cartography to ignore what cartographers say and to explore maps and mapmaking apart from the tools needed to make a map, perhaps DH is ready to enter a new phase, one in which it begins its own renewal by no longer valorizing tools, code, and technology and by letting the observers, the consumers, the fantasists, and the historians of power and oppression in (without their laptops). Indeed, what DH can learn from the history of cartography is that what DH is, in all its many forms, is seldom (just) what digital humanists say it is.

    _____

    Tim Duffy is a scholar of Renaissance literature, poetics, and spatial philosophy.


    _____

    Notes

    [1] See David Stoddart, “Geography—a European Science,” in On Geography and Its History, pp. 28-40. For a discussion of Stoddart’s thinking, see Derek Gregory, Geographical Imaginations, pp. 16-21.

    [2] Obviously, critics and writers make, but their critique exists outside of the production of the artifact that they study. The cartographic theorist, as this article argues, need not be a cartographer any more than a critic or theorist of the digital need be a programmer or creator of digital objects.

    [3] For more on the political problems of dependence on grants, see Waltzer (2012): “One of those conditions is the dependence of the digital humanities upon grants. While the increase in funding available to digital humanities projects is welcome and has led to many innovative projects, an overdependence on grants can shape a field in a particular way. Grants in the humanities last a short period of time, which make them unlikely to fund the long-term positions that are needed to mount any kind of sustained challenge to current employment practices in the humanities. They are competitive, which can lead to skewed reporting on process and results, and reward polish, which often favors the experienced over the novice. They are external, which can force the orientation of the organizations that compete for them outward rather than toward the structure of the local institution and creates the pressure to always be producing” (340-341).

    [4] In her reading of how digital humanities deploys niceness, Koh writes “In my reading of this discursive structure, each rule reinforces the other. An emphasis on method as it applies to a project—which requires technical knowledge—requires resolution, which in turn leads to niceness and collegiality. To move away from technical knowledge…is to move away from niceness and toward a darker side of the digital humanities. Proponents of technical knowledge appear to be arguing that to reject an emphasis on method is to reject an emphasis on civility” (100).

    _____

    Works Cited

    • Allington, Daniel, Sarah Brouillette, and David Golumbia. 2016. “Neoliberal Tools (and Archives): A Political History of Digital Humanities.” Los Angeles Review of Books.
    • Boelhower, William. 1988. “Inventing America: The Culture of the Map” in Revue française d’études américaines 36. 211-224.
    • Cecire, Natalia. 2011. “When DH Was in Vogue; or, THATCamp Theory.”
    • Dalché, Patrick Gautier. 2007. “The Reception of Ptolemy’s Geography (End of the Fourteenth to Beginning of the Sixteenth Century),” in Cartography in the European Renaissance, Volume 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 285-364.
    • Fiormonte, Domenico, Teresa Numerico, and Francesca Tomasi. 2015. The Digital Humanist: A Critical Inquiry. New York: Punctum Books.
    • Gregory, Derek. 1994. Geographical Imaginations. Cambridge: Blackwell.
    • Harley, J.B. 2011. “Deconstructing the Map” in The Map Reader: Theories of Mapping Practice and Cartographic Representation, First Edition, edited by Martin Dodge, Rob Kitchin and Chris Perkins. New York: John Wiley & Sons, Ltd. 56-64.
    • Jacob, Christian. 2005. The Sovereign Map. Translated by Tom Conley. Chicago:  University of Chicago Press.
    • Kirschenbaum, Matthew. 2014. “What is ‘Digital Humanities,’ and Why Are They Saying Such Terrible Things about It?” Differences 25:1. 46-63.
    • Koh, Adeline. 2014. “Niceness, Building, and Opening the Genealogy of the Digital Humanities: Beyond the Social Contract of Humanities Computing.” Differences 25:1. 93-106.
    • Scheinfeldt, Tom. 2012. “Why Digital Humanities is ‘Nice.’” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Trettien, Whitney. 2016. “Creative Destruction/‘Digital Humanities.’” Medium (Aug 24).
    • Watts, Pauline Moffitt. 2007. “The European Religious Worldview and Its Influence on Mapping,” in The History of Cartography: Cartography in the European Renaissance, Vol. 3, Part 1. Edited by David Woodward. Chicago: University of Chicago Press. 382-400.
    • Waltzer, Luke. 2012. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” In Matthew Gold, ed., Debates in the Digital Humanities. Minneapolis: University of Minnesota Press.
    • Witmore, Michael. 2016. “Latour, the Digital Humanities, and the Divided Kingdom of Knowledge.” New Literary History 47:2-3. 353-375.

     

  • David Golumbia — The Digital Turn

    David Golumbia — The Digital Turn

    David Golumbia

    Is there, was there, will there be, a digital turn? In (cultural, textual, media, critical, all) scholarship, in life, in society, in politics, everywhere? What would its principles be?

    The short prompt I offered to the contributors to this special issue did not presume to know the answers to these questions.

    That means, I hope, that these essays join a growing body of scholarship and critical writing (much, though not by any means all, of it discussed in the essays that make up this collection) that suspends judgment about certain epochal assumptions built deep into the foundations of too much practice, thought, and even scholarship about just these questions.

    • In “The New Pythagoreans,” Chris Gilliard and Hugh Culik look closely at the long history of Pythagorean mystic belief in the power of mathematics and its near-exact parallels in contemporary promotion of digital technology, and especially surrounding so-called big data.
    • In “From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ after the Digital Turn,” Zachary Loeb asks about the nature of the literal and metaphorical machines around us via a discussion of the work of the 20th century writer and social critic Lewis Mumford, one of the thinkers who most fully anticipated the digital revolution and understood its likely consequences.
    • In “Digital Proudhonism,” Gavin Mueller writes that “a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry and reformulate both a theory of cultural production under digital capitalism as well as radical politics of work and technology for the 21st century.”
    • In “Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn,” Tim Duffy pushes back “against the valorization of ‘tools’ and ‘making’ in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond), has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.”
    • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling, in “Origin Stories in the Genealogy of Cherokee Language Technology,” argue that “the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code.”
    • In “Artificial Saviors,” tante connects the pseudo-religious and pseudo-scientific rhetoric found at a surprising rate among digital technology developers and enthusiasts: “When AI morphed from idea or experiment to belief system, hackers, programmers, ‘data scientists,’ and software architects became the high priests of a religious movement that the public never identified and parsed as such.”
    • In “The Endless Night of Wikipedia’s Notable Woman Problem,” Michelle Moravec “takes on one of the ‘tests’ used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.”
    • In “The Computational Unconscious,” Jonathan Beller interrogates the “penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing.”
    • In “What Indigenous Literature Can Bring to Electronic Archives,” Siobhan Senier asks, “How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?”
    • Rob Hunter provides the following abstract of “The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory”:

      The digital turn is associated with considerable enthusiasm for the democratic or even emancipatory potential of networked computing. Free, libre, and open source (FLOSS) developers and maintainers frequently endorse the claim that the digital turn promotes democracy in the form of improved deliberation and equalized access to information, networks, and institutions. Interpreted in this way, democracy is an ethical practice rather than a form of struggle or contestation. I argue that this depoliticized conception of democracy draws on commitments—regarding personal autonomy, the ethics of intersubjectivity, and suspicion of mass politics—that are also present in recent strands of liberal political thought. Both the rhetorical strategies characteristic of FLOSS as well as the arguments for deliberative democracy advanced within contemporary political theory share similar contradictions and are vulnerable to similar critiques—above all in their pathologization of disagreement and conflict. I identify and examine the contradictions within FLOSS, particularly those between commitments to existing property relations and the championing of individual freedom. I conclude that, despite the real achievements of the FLOSS movement, its depoliticized conception of democracy is self-inhibiting and tends toward quietistic refusals to consider the merits of collective action or the necessity of social critique.

    • John Pat Leary, in “Innovation and the Neoliberal Idioms of Development,” “explores the individualistic, market-based ideology of ‘innovation’ as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called ‘development.’” He works “to define the ideology of ‘innovation’ that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.”
    • Annemarie Perez, in “UndocuDreamers: Public Writing and the Digital Turn,” writes of a “paradox” she finds between her work with students who belong to communities targeted by recent immigration enforcement crackdowns and the default assumptions about “open” and “public” found in so much digital rhetoric: “My students should write in public. Part of what they are learning in Chicanx studies is about the importance of their voices, of their experiences and their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder how I as an instructor should either encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched, every aspect of their lives.”
    • Gretchen Soderlund, in “Futures of Journalisms Past (or, Pasts of Journalism’s Future),” looks at discourses of “the future” in journalism from the 19th and 20th centuries, in order to help frame current discourses about journalism’s “digital future,” in part because “when it comes to technological and economic speedup, journalism may be the canary in the mine.”
    • In “The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus,” Anthony Galluzzo examines the often-misunderstood and misrepresented writings of William Godwin, and also those of Thomas Malthus, to demonstrate how far back in English-speaking political history the roots of today’s technological Prometheanism reach, and how destructive it can be, especially for the political left.


     

     

     

  • Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)

    Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)

    a review of Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, 2018)

    by Zachary Loeb

    ~

    There is something rather precious about Google employees, and Internet users, who earnestly believe the “don’t be evil” line. Though those three words have often been taken to represent a sort of ethos, their primary function is as a steam vent – providing a useful way to allow building pressure to escape before it can become explosive. While “don’t be evil” is associated with Google, most of the giants of Silicon Valley have their own variations of this comforting ideological façade: Apple’s “think different,” Facebook’s talk of “connecting the world,” the smiles on the side of Amazon boxes. And when a revelation troubles this carefully constructed exterior – when it turns out Google is involved in building military drones, when it turns out that Amazon is making facial recognition software for the police – people react in shock and outrage. How could this company do this?!?

    What these revelations challenge is not simply the mythos surrounding particular tech companies, but the mythos surrounding the tech industry itself. After all, many people have their hopes invested in the belief that these companies are building a better, brighter future, and they are naturally taken aback when they are forced to reckon with stories that reveal how these companies are building the types of high-tech dystopias that science fiction has been warning us about for decades. And in this space there are some who seem eager to allow a new myth to take root: one in which the unsettling connections between big tech firms and the military industrial complex are something new. But as Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates, the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today” (9).

    Thus, cases of Google building military drones, Facebook watching us all, and Amazon making facial recognition software for the police, need to be understood not as aberrations. Rather, they are business as usual.

    Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

    Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

    While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58). This split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial, or scientific, side. ARPANET, the famous forerunner of the Internet, was developed to connect computer centers at a variety of prominent universities. Reliant on Interface Message Processors (IMPs), the system routed messages through a variety of nodes, and if one node went down the system would reroute the message through other nodes – it was a system of relaying information built to withstand a nuclear war.

    Though all manner of utopian myths surround the early Internet, and by extension its forerunner, Levine highlights that “surveillance was baked in from the very beginning” (75). Case in point: the largely forgotten CONUS Intel program that gathered information on millions of Americans. By encoding this information on IBM punch cards, which were then fed into a computer, law enforcement groups and the army were able to access information not only regarding criminal activity, but also activities protected by the First Amendment. As news of these databases reached the public they generated fears of a high-tech surveillance society, leading some Senators, such as Sam Ervin, to push back against the program. And in a foreshadowing of further things to come, “the army promised to destroy the surveillance files, but the Senate could not obtain definitive proof that the files were ever fully expunged” (87). Though there were concerns about the surveillance potential of ARPANET, its growing power was hardly checked, and more government agencies began building their own subnetworks (PRNET, SATNET). Yet, as they relied on different protocols, these networks could not connect to each other, until TCP/IP, “the same basic network language that powers the Internet today” (95), allowed them to do so.

    Yet surveillance of citizens, and public pushback against computerized control, is not the grand origin story that most people are familiar with when it comes to the Internet. Instead, the story that gets told is one whereby a military technology is filtered through the sieve of a very selective segment of the 1960s counterculture to allow it to emerge with some rebellious credibility. This view, owing much to Stewart Brand, transformed the nascent Internet from a military technology into a technology for everybody “that just happened to be run by the Pentagon” (106). Brand played a prominent and public role in rebranding the computer, as well as those working on the computers – turning these cold calculating machines into doors to utopia, and portraying computer programmers and entrepreneurs as the real heroes of the counterculture. In the process the military nature of these machines disappeared behind a tie-dyed shirt, and the fears of a surveillance society were displaced by hip promises of total freedom. The government links to the network were further hidden as ARPANET slowly morphed into the privatized commercial system we know as the Internet. It may seem mind-boggling that the Internet was simply given away with “no real public debate, no discussion, no dissension, and no oversight” (121), but it is worth remembering that this was not the Internet we know. Rather, it was how the myth of the Internet we know was built. A myth that combined, as was best demonstrated by Wired magazine, “an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used” (133).

    The shift from ARPANET to the early Internet to the Internet of today presents a steadily unfolding tale whose result is that, today, “the Internet is like a giant, unseen blob that engulfs the modern world” (169). And in terms of this “engulfing” it is difficult not to think of a handful of giant tech companies (Amazon, Facebook, Apple, eBay, Google) that are responsible for much of it. In the present Internet atmosphere people have become largely inured to the almost clichéd canard that “if you’re not paying, you are the product,” but what this represents is how people have come to accept that the Internet is one big surveillance machine. Of course, feeding information to the giants made a sort of sense: many people (at least early on) seem to have been genuinely taken in by Google’s “Don’t Be Evil” image, and they saw themselves as the beneficiaries of the fact that “the more Google knew about someone, the better its search results would be” (150). The key insight that firms like Google seem to have understood is that a lot can be learned about a person based on what they do online (especially when they think no one is watching) – what people search for, what sites people visit, what people buy. And most importantly, what these companies understand is that “everything that people do online leaves a trail of data” (169), and controlling that data is power. These companies “know us intimately, even the things that we hide from those closest to us” (171). ARPANET found itself embroiled in a major scandal, at its time, when it was revealed how it was being used to gather information on and monitor regular people going about their lives – and it may well be that “in a lot of ways” the Internet “hasn’t changed much from its ARPANET days. It’s just gotten more powerful” (168).

    But even as people have come, by their actions if not necessarily by their beliefs, to gradually accept that the Internet is one big surveillance machine, events periodically puncture this complacency. Case in point: Edward Snowden’s revelations about the NSA, which splashed the scale of Internet-assisted surveillance across the front pages of the world’s newspapers. Reporting linked to the documents Snowden leaked revealed how “the NSA had turned Silicon Valley’s globe-spanning platforms into a de facto intelligence collection apparatus” (193), and these documents exposed “the symbiotic relationship between Silicon Valley and the US government” (194). And yet, in the ensuing brouhaha, Silicon Valley was largely able to paint itself as the victim. Levine attributes some of this to Snowden’s own libertarian political bent; as he became a cult hero amongst technophiles, cypher-punks, and Internet advocates, “he swept Silicon Valley’s role in Internet surveillance under the rug” (199), while advancing a libertarian belief in “the utopian promise of computer networks” (200) similar to that professed by Stewart Brand. In many ways Snowden appeared as the perfect heir apparent to the early techno-libertarians, especially as he (like them) focused less on mass political action and more on doubling down on the idea that salvation would come through technology. And Snowden’s technology of choice was Tor.

    While Tor may project itself as a solution to surveillance, and be touted as such by many of its staunchest advocates, Levine casts doubt on this. Noting that “Tor works only if people are dedicated to maintaining a strict anonymous Internet routine,” one consisting of dummy e-mail accounts and all transactions carried out in Bitcoin, Levine suggests that what Tor offers is “a false sense of privacy” (213). Levine describes the roots of Tor in an original need to provide government operatives with an ability to access the Internet, in the field, without revealing their true identities; and in order for Tor to be effective (and not simply signal that all of its users are spies and soldiers) the platform needed to expand its user base: “Tor was like a public square—the bigger and more diverse the group assembled there, the better spies could hide in the crowd” (227).

    Though Tor had spun off as an independent non-profit, it remained reliant for much of its funding on the US government, a matter which Tor aimed to downplay by emphasizing its radical activist user base and by forming close working connections with organizations like WikiLeaks that often ran afoul of the US government. And in the figure of Snowden, Tor found a perfect public advocate, who seemed to be living proof of Tor’s power – after all, he had used it successfully. Yet, as the case of Ross Ulbricht (the “Dread Pirate Roberts” of Silk Road notoriety) demonstrated, Tor may not be as impervious as it seems – researchers at Carnegie Mellon University “had figured out a cheap and easy way to crack Tor’s super-secure network” (263). To further complicate matters, Tor had come to be seen by the NSA “as a honeypot”: to the NSA, “people with something to hide” were the ones using Tor, and simply by using it they were “helping to mark themselves for further surveillance” (265). And much of the same story seems to be true for the encrypted messaging service Signal (it is government funded, and less secure than its fans like to believe). While these tools may be useful to highly technically literate individuals committed to maintaining constant anonymity, “for the average users, these tools provided a false sense of security and offered the opposite of privacy” (267).

    The central myth of the Internet frames it as an anarchic utopia built by optimistic hippies hoping to save the world from intrusive governments through high-tech tools. Yet, as Surveillance Valley documents, “computer technology can’t be separated from the culture in which it is developed and used” (273). Surveillance is at the core of, and has always been at the core of, the Internet – whether the all-seeing eye belongs to the government agency or the corporation. And this is a problem that, alas, won’t be solved by crypto-fixes that present technological solutions to political problems. The libertarian ethos that undergirds the Internet works well for tech giants and cypherpunks; a real alternative is not a set of tools that allows a small, technically literate gaggle to play in the shadows, but a genuine democratization of the Internet.


    *


    Surveillance Valley is not interested in making friends.

    It is an unsparing look at the origins of, and the current state of, the Internet, and a book that has little interest in helping to prop up the popular myths that sustain the utopian image of the Internet. It is a book that should be read by anyone who was outraged by the Facebook/Cambridge Analytica scandal, anyone who feels uncomfortable about Google building drones or Amazon building facial recognition software, and frankly by anyone who uses the Internet. At the very least, after reading Surveillance Valley many of those aforementioned situations seem far less surprising. While there is no shortage of books, many of them quite excellent, arguing that steps need to be taken to create “the Internet we want,” in Surveillance Valley Yasha Levine takes a step back and insists that “first we need to really understand what the Internet really is.” And it is not as simple as merely saying “Google is bad.”

    While much of the history that Levine unpacks won’t be new to historians of technology, or to those well versed in critiques of technology, Surveillance Valley brings many often-separate strands into one narrative. Too often the early history of computing and the Internet is placed in one silo, while the rise of the tech giants is placed in another – by bringing them together, Levine is able to show the continuities and allow them to be understood more fully. What is particularly noteworthy in Levine’s account is his emphasis on early pushback to ARPANET, an often forgotten series of occurrences that certainly deserves a book of its own. Levine describes students in the 1960s who saw in early ARPANET projects “a networked system of surveillance, political control, and military conquest being quietly assembled by diligent researchers and engineers at college campuses around the country,” and as Levine provocatively adds, “the college kids had a point” (64). Similarly, Levine highlights NBC reporting from 1975 on the CIA and NSA spying on Americans by utilizing ARPANET, and on the efforts of Senators to rein in these projects. Though Levine is not presenting, nor is he claiming to present, a comprehensive history of pushback and resistance, his account makes it clear that liberatory claims regarding technology were often met with skepticism. And much of that skepticism proved to be highly prescient.

    Yet this history of resistance has largely been forgotten amidst the clever contortions that shifted the Internet’s origins, in the public imagination, from counterinsurgency in Vietnam to the counterculture in California. Though the part of Surveillance Valley likely to cause the most contention is the chapters on crypto-tools like Tor and Signal, perhaps Levine’s greatest heresy is his refusal to pay homage to early tech evangelists like Stewart Brand and Kevin Kelly. While the likes of Brand and John Perry Barlow are often celebrated as visionaries whose utopian blueprints have been warped by power-hungry tech firms, Levine is frank in framing such figures as long-haired libertarians who knew how to spin a compelling story that made empowering massive corporations seem like a radical act. And this is in keeping with one of the major themes that runs, often subtly, through Surveillance Valley: the substitution of technology for politics. Thus, in his book, Levine not only frames the Internet as disempowering insofar as it runs on surveillance and relies on massive corporations, but also emphasizes how the ideological core of the Internet focuses all political action on technology. To every social, economic, and political problem the Internet presents itself as the solution – but Levine is unwilling to go along with that idea.

    Those who were familiar with Levine’s journalism before he penned Surveillance Valley will know that much of his reporting has covered crypto-tech, like Tor, and similar privacy technologies. Indeed, in a certain respect, Surveillance Valley can be read as an outgrowth of that reporting. It is also important to note, as Levine does in the book, that he did not make himself many friends in the crypto community by taking on Tor. It is doubtful that cypherpunks will like Surveillance Valley, but it is just as doubtful that they will bother to actually read it and engage with Levine’s argument or the history he lays out. This is a shame, for it would be a mistake to frame Levine’s book as an attack on Tor (or on those who work on the project). Levine’s comments on Tor are in keeping with the thrust of his book’s larger argument: such privacy tools are high-tech solutions to problems created by high-tech society, and they mainly serve to keep people hooked into all those high-tech systems. And he questions the politics of Tor, noting that “Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable solution” (268). Or, to put it another way, Tor is a bit like shopping at Whole Foods: people who are concerned about their food are willing to pay a little more to shop there, but in the end doing so lets them feel good about their choices without genuinely challenging the broader system. And, of course, now Whole Foods is owned by Amazon. The most important element of Levine’s critique of Tor is not that it doesn’t work (for some, like Snowden, it clearly does), but that most users do not know how to use it properly (and are unwilling to lead a genuinely full-crypto lifestyle), and so it fails to offer more than a false sense of security.

    Thus, to say it again, Surveillance Valley isn’t particularly interested in making a lot of friends. With one hand it brushes away the comforting myths about the Internet, and with the other it pushes away the tools that are often touted as the solution to many of the Internet’s problems. And in so doing Levine takes on a variety of technoculture’s sainted figures, like Stewart Brand and Edward Snowden, and even organizations like the EFF. While Levine clearly doesn’t seem interested in creating new myths, or propping up new heroes, it seems as though he somewhat misses an opportunity here. Levine shows how some groups and individuals warned about the Internet back when it was still ARPANET, and a greater emphasis on such people could have helped create a better sense of alternatives and paths not taken. Levine notes near the book’s end that “we live in bleak times, and the Internet is a reflection of them: run by spies and powerful corporations just as our society is run by them. But it isn’t all hopeless” (274). Yet it would be easier to believe the “isn’t all hopeless” sentiment had the book provided more analysis of successful instances of pushback. While it is respectable that Levine puts forward democratic (small d) action as the needed response, this solution comes at the end of a lengthy work that has detailed how the Internet has largely eroded democracy. What Levine’s book points to is that it isn’t enough to just talk about democracy; one needs to recognize that some technologies are democratic while others are not. And though we are loath to admit it, perhaps the Internet (and computers) simply are not democratic technologies. Sure, we may be able to use them for democratic purposes, but that does not make the technologies themselves democratic.

    Surveillance Valley is a troubling book, but it is an important book. It smashes comforting myths and refuses to leave its readers with simple solutions. What it demonstrates in stark relief is that surveillance and unnerving links to the military-industrial complex are not signs that the Internet has gone awry, but signs that the Internet is functioning as intended.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.
