boundary 2

  • Zachary Loeb — Where We’re Going, We’ll Still Probably Need Roads (Review of Paris Marx, Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation)

    a review of Paris Marx, Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation (Verso, 2022)

    by Zachary Loeb

    You can learn a lot about your society’s relationship to technology by looking at its streets. Are the roads filled with personal automobiles or trolley cars, bike lanes or occupied parking spaces? Are there navigable sidewalks, or is this the sort of place where a car is a requirement? Does a subway rumble beneath the street, or is the only sound the honking of cars stuck in traffic? Are the people standing on the corner waiting for the bus, or for the car they just booked through an app? Or is it some strange combination of many of these things simultaneously? The roadways we traverse on a regular basis can come to seem quite banal in their familiarity, yet they capture a complex tale of past decisions, current priorities, and a range of competing visions of the future.

    Our streets not only provide us with a literal path by which to get where we are going, they also represent an essential space in which debates about where we are going as a society play out. All of which is to say, as we hurtle down the road towards the future, it is important to pay attention to the fight for control of the steering wheel, and it’s worth paying attention to the sort of vehicle in which we find ourselves.

    In Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation, Paris Marx analyzes the social forces that have been responsible for making our roads (and by extension our cities, towns, and suburbs) function the way they do, while placing particular emphasis on the groups and individuals trying to determine what the roads of the future will look like. It is a cutting assessment that examines the ways in which tech companies are seeking to take over the streets and sidewalks, as well as the space above and below them: with gig-economy drivers, self-driving cars, new tunnels, delivery robots, and much else. To the extent that technological solutions are frequently touted as the only possible response to complex social/political/economic problems, Marx moves beyond the flashy headlines to consider what those technological solutions actually look like when the proverbial rubber hits the road. In Road to Nowhere the streets and sidewalks appear as sites of political contestation, and Marx delivers an urgent warning against surrendering those spaces to big tech. After all, as Marx documents, the lords of the information superhighway are leaving plenty of flaming debris along the literal highways.

    The primary focus of Road to Nowhere is on the particular vision of mobility being put forth by contemporary tech companies, but Marx takes care to explore the industries and interests that had been enforcing their view of mobility long before anyone had ever held a smartphone. As Marx explains, the street and the city were not always the possession of the personal automobile, indeed the automobile was at one time “the dominant technology that ‘disrupted’ our society” (10). The introduction of the automobile saw these vehicles careening down streets that were once shared by many other groups, and as automobiles left destruction in their wake, the push for safety that followed was won by ostensibly protecting pedestrians, which in practice meant handing the streets over to the automobile. Marx connects the rise of the personal automobile to “a much longer trend of elites remaking the city to serve their interests” (11), and emphasizes how policies favoring automobiles undermined other ways of moving about cities (including walking and streetcars). As the personal automobile grew in popularity, and mass production made it a product available not only to the wealthy, physical spaces were further transformed such that an automobile became less and less of a luxury and more and more of a need. From the interstate highway system to the growth of suburbs to under-investment in public transit to the development of a popular mythos connecting the car to freedom—Marx argues that the auto-oriented society is not the inevitable result flowing from the introduction of the automobile, but the result of policies and priorities that gradually remade streets and cities in the automobile’s image.

    Even as the automobile established its dominance in the mid-twentieth century, a new sort of technology began to appear that promised (and threatened) to further remake society: the computer. Pivoting for a moment away from the automobile, Marx considers the ideological foundations of many tech companies, with their blend of techno-utopian hopefulness and anti-government sentiment wherein “faith was also put in technology itself as the means to address social and economic challenges” (44). While the mythology of Silicon Valley often lauds the rebellious geek, hacking away in a garage, Marx highlights the ways in which Silicon Valley (and the computing industry more generally) owes its early success to a massive influx of government money. Cold War military funding was very good—indeed, essential—for the nascent computing sector. Despite the significance of government backing, Silicon Valley became a hotbed for an ideology that sneered at democratic institutions while elevating the computer (and its advocates) as the bringer(s) of societal change. Thus, the very existence of complex social/political/economic problems became evidence of the failures of democracy and proof of the need for high-tech solutions—this was not only an ahistorical and narrow worldview, but one wherein a group of mostly-wealthy, mostly-white, mostly-cis-male tech lovers saw themselves as the saviors society had been waiting for. And while this worldview was reified in various gadgets, apps, and platforms, “as tech companies seek to extend their footprint into the physical world,” this same ideology—alongside an agenda that places “growth, profits, and power ahead of the common good”—is what undergirds Silicon Valley’s mobility project (62).

    One of the challenges in wrestling with tech companies’ visions is to not be swept away by the shiny high-tech vision of the future they disseminate. And one area where this can be particularly difficult is when it comes to electric cars. After all, amongst the climate conscious, the electric car appears as an essential solution in the fight against climate change. Yet, beyond the fact that “electric vehicles are not a new invention” (64), the electric car appears as an almost perfect example of the ways in which tech companies attempt to advance a seemingly progressive vision of the future while further entrenching the status quo. Much of the green messaging around electric vehicles “narrowly focuses on tailpipe emissions, ignoring the harms that pervades the supply chain and the unsustainable nature of auto-oriented development” (71). Too often the electric car appears as a way for individuals of means to feel that they are doing their part to “personal responsibility” their way out of climate change, even as the continued focus on the personal automobile blocks the transition towards public transit that is needed. Furthermore, the shift towards electric vehicles does not end destructive extraction, it just shifts the extraction from fossil fuels to minerals like copper, nickel, cobalt, lithium, and coltan. The electric car risks being a way of preserving auto-centric society, and this “does not solve how the existing transportation system fuels the climate crisis and the destruction of local environments all around the world” (88).

    If personal ownership of a car is such a problem, perhaps the solution is to simply have an app on your phone that lets you summon a vehicle (complete with a driver) when you need one, right? Not so fast. Companies like Uber sold themselves to the public on a promise of making cars available when needed, especially for urban dwellers who did not necessarily have a car of their own. The pitch was one of increased mobility, where those in need of a ride could easily hire one, while cash-strapped car owners could have a new opportunity to earn a few extra bucks driving in the evenings. Far from solving congestion, empowering drivers, and increasing everyone’s mobility, “the Uber model adds vehicles to the road and creates more traffic, especially since the app incentivizes drivers to be active during peak times when traffic is already backed up” (99). Despite claims that their app-based services would solve a host of issues, Uber and its ilk have added to urban congestion, failed to provide their drivers with a stable income, and failed to truly increase the mobility options for underserved communities.

    If gig-drivers wind up being such an issue, why not try to construct a world where drivers are not necessary? Thus, perhaps few ideas related to the future of mobility have as firm a grasp on the popular imagination as the self-driving car: a fantasy that seems straight out of science fiction, and one that has remained largely a fantasy with good reason. After all, what a science fiction writer can dream up, and what a special effects team can mock up for a movie, face serious obstacles in the real world. The story of tech companies and autonomous vehicles is one of grandiose hype (that often generates numerous glowing headlines), followed by significantly diminished plans once the challenges of introducing self-driving cars are recognized. While much of the infrastructure we encounter is built with automobiles in mind, autonomous cars require a variety of other sorts of infrastructure that do not currently exist. Just as “automobiles required a social reconstruction in addition to a physical reconstruction, so too will autonomous vehicles” (125), and this will entail transforming infrastructure and habits that have been built up over decades. Attempts to introduce autonomous vehicles have revealed the clash between the tech company vision of the world and the complexities of the actually existing world—which is a major reason why many tech companies are quietly backing away from the exuberance with which they once hyped autonomous cars.

    Well, if the already existing roads are such a challenge, why not think abstractly? Instead of looking at the road, look above the road and below the road! Thus, plans such as the Boring Company’s proposed tunnels, and ideas about “flying cars,” seek to get around many of the challenges the tech industry is encountering in the streets by attempting to capitalize on seemingly unused space. At first glance, such ideas may seem like clear examples of the sort of “out of the box thinking” for which tech companies are famed, yet “the span of time between the initial bold claims of prominent tech figures and the general realization that they are fraudulent appears to be shrinking” (159). And once more, in contrast to the original framing that seeks to treat new tunnels and flying cars as emancipatory routes, what becomes clear is that these are just another area in which wealthy tech elites are fantasizing about ways of avoiding getting stuck in traffic with the hoi polloi.

    Much of the history of the automobile that Marx recounts involves pedestrians being deprived of more and more space, and this story continues as new battles for the sidewalk intensify. As with other tech company interventions in mobility, micromobility solutions that cover sidewalks in scooters and bikes rentable via app present themselves with a veneer of green accessibility. Yet littering cities with cheap bikes and scooters that wear out quickly while clogging the sidewalks turns out to be just another service “designed to benefit the company” without genuinely assessing the mobility needs of particular communities (166). Besides, all of those sidewalk scooters now have to compete for space with swarms of delivery robots that make sidewalks more difficult to use.

    From the electric car to the app-summoned chauffeur to the autonomous car to the flying car, tech companies have no shortage of high-tech ideas for the future of mobility. And yet, “the truth is that when we look at the world that is actually being created by the tech industry’s interventions, we find that the bold promises are in fact a cover for a society that is both more unequal and one where that inequality is even more fundamentally built into the infrastructure and services we interact with every single day” (185). While the built environment is filled with genuine mobility issues, the solutions put forward by tech companies ignore the complexity of how these issues came about in favor of techno-fixes designed to favor tech companies’ bottom lines while simultaneously feeding them new data streams to capitalize on. The gleaming cities envisioned by tech elites and their companies may be broadcast to all, but these cities are playgrounds for the wealthy tech elite, not for the rest of us.

    The hope that tech companies will come along and sort everything out with some sort of nifty high-tech program speaks to a lack of faith in societies’ ability to tackle the complex issues they face. Yet, to make mobility work for everyone, what is essential is not to flee from politics, but to truly address politics. The tech companies are working to reshape our streets and cities to better fit their needs; people must counter by insisting that streets and cities actually be made to meet people’s needs. Instead of looking to cities with roads clogged with Ubers and sidewalks blocked by broken scooters, we need to be paying attention to the cities that have devoted resources (and space) to pedestrians while improving and expanding public transit. The point is not to reject technology but to reject the tech companies’ narrow definition of what technology is and how it can be used: “we need to utilize technology where it can serve us, while ensuring power remains firmly in the hands of a democratic public” (223).

    After all, “better futures are possible, but they will not be delivered through technological advancement alone” (225). We can no longer sit idle in the passenger seat; we need to take the wheel, and the wheels.

    ***

    Contrary to its somewhat playful title, Road to Nowhere lays out a very clear case that Silicon Valley’s vision of the future of mobility is in fact a road to somewhere—the problem is that it’s not a good somewhere. While the excited pronouncements of tech CEOs (and the oft-uncritical coverage of those pronouncements) may evoke images of gleaming high-tech utopias, a more critical and grounded assessment of these pipe dreams reveals them to be unrealistic fantasies mixed with ideas designed primarily to meet the needs of tech CEOs rather than the genuine mobility needs of most people. As Paris Marx makes clear throughout the chapters of Road to Nowhere, it is essential to stop taking the plans of tech companies at face value and to instead do the discomforting work of facing up to the realities of these plans. The way our streets and cities have been built certainly presents a range of very real problems to solve, but in the choice of which problems to address it makes a difference whether the challenges being considered are those facing a minimum-wage worker or a billionaire mogul furious about sitting in traffic. Or, to put it somewhat differently, there are flying cars in the movie Blade Runner, but that does not mean we should attempt to build that world.

    Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation provides a thoughtful analysis and impassioned denunciation of Silicon Valley’s mobility efforts up to this point, and pivots from this consideration of the past and the present to cast doubt on Silicon Valley’s future efforts. Throughout the book, Marx writes with the same punchy eloquence that has made them such a lively host of the Tech Won’t Save Us podcast. And while Marx has staked out an important space in the world of contemporary tech critique thanks to that podcast, this book makes it clear that Marx is not only a dynamic interviewer of other critics, but a vital critic in their own right. With its wide-ranging analysis, and clear consideration of the route we find ourselves on unless we change course, Road to Nowhere presents an important read for those concerned with where Silicon Valley is driving us.

    The structure of the book provides a clear argument that briskly builds momentum, and even as the chapters focus on certain specific topics they flow seamlessly from one to the next. Having started by providing a quick history of the auto-centric city, and the roots of Silicon Valley’s ideology, Marx’s chapters follow a clear path through mobility issues. If the problem is pollution, why not electric cars? If the problem is individual cars, even electric ones, why not make it easy to summon someone else’s car? If the problem is the treatment of the drivers of those cars, why not cars without drivers? If autonomous vehicles are unrealistic because of already existing infrastructure, why not wholly new infrastructure? If creating wholly new infrastructure (below and above ground) is more difficult than it may seem, what about flooding cities with cheap bikes? Part of what makes Road to Nowhere’s critique of Silicon Valley’s ideas so successful is that Marx does not get bogged down in just one of Silicon Valley’s areas of interest, and instead provides a critique that captures how the issue is not only Silicon Valley’s response to this or that problem, but the way that Silicon Valley frames problems and envisions solutions. To the extent that the auto-centric world reflects a world remade in the shape of the automobile, Silicon Valley is currently hard at work attempting to remake the world in its own shape, and as Marx makes clear, the needs of Silicon Valley companies and the needs of people trying to get around are not the same.

    At the core of Marx’s analysis is a sense that the worldview of Silicon Valley is one that is no longer so easily confined to certain geographical boundaries in California. As the tech companies have been permitted to present themselves as the shiny saviors of society, that ideology has often overwhelmed faith in democratic solutions. Marx notes that “as the neoliberal political system gave up on bold policies in favor of managing a worsening status quo, they left the door open to techno-utopians to fill the void” (5). When people no longer believe that a democratic society can even maintain the bridges and roads, it opens up a space in which tech companies can drive into town and announce an ambitious project to remake the roads. Marx further argues that “too often, governments stand back and allow the tech industry to roll out whatever ideas its executives and engineers can dream up,” a belief undergirded by a sense that “whatever tech companies want is inevitable…and that neither governments, traditional companies, nor even the public should stand in their way” (178). Part of the danger of this sense of inevitability is that it cedes the future of mobility to the tech companies, robbing municipalities of both the initiative and the responsibility to meet the mobility needs of the people who live there. Granted, as the many failures Marx documents show, just because a tech company says that it will do something does not necessarily mean that it will be able to do it.

    Published by Verso Books and written in a clear, comprehensive voice, Road to Nowhere stands as an intervention into broad discussions about the future of mobility, particularly those currently taking place on the political left. Thus, even as many readers are likely to cheer at Marx’s skewering of Musk, it is likely that many of those same readers will chafe at the book’s refusal to treat electric cars as a solution. Sure, it’s one thing to lambast Elon Musk (and by extension Tesla), but to critique electric cars as such? Here Marx makes it very clear that we cannot be taken in by too-neat techno-fixes, whether they are touted by a specific company (such as Tesla) or attributed to a whole class of technologies (electric cars). As Marx makes clear, all of the minerals in those electric cars come from somewhere, and what’s more the issues that we face (both mobility issues and environmental ones) are not simply the result of one particular technology (such as the gas-powered car) but the way in which we have built our societies around certain technologies and the infrastructure that those technologies require. Therefore, the matter of mobility is about which questions we are willing to ask, and about recognizing that we need to be asking a different set of questions.

    Road to Nowhere is at its best when Marx does this work by moving past the particular tech companies to consider the deeper matters of the underlying technologies. Certainly, readers of the book will find plenty of consideration of Tesla and Uber (alongside their famous leaders), but the strength of Road to Nowhere is that the book does not act as though the problem is simply Tesla or Uber. Rather, Marx considers the way in which the problem forces us to think about automobiles themselves, about the long history of automobiles, and about the ways in which so much physical infrastructure has been built to prioritize the use of automobiles. This is, obviously, not to give Uber or Tesla a pass—but Marx does the essential work of emphasizing that this isn’t just about a handful of tech companies and their bombastic CEOs, this is a question about the ways in which societies orient themselves around particular sets of technologies. And Marx’s response is not a call for a return to some romanticized pastoral landscape, but is instead an argument in favor of placing the needs of people above the needs of technologies (and the people selling those technologies). Much of our built environment has been constructed around the automobile; what if we started building that environment around the needs of the human being?

    The challenge of what it would mean to construct our cities around the needs of people, rather than the needs of profit (or the needs of machines), is not a new question. And while Marx briefly considers some past figures who have wrestled with this matter—such as Jane Jacobs and Murray Bookchin—it might have been worthwhile to spend a little more time engaging more fully with past critics. At the risk of becoming too much of a caricature of myself as a reviewer, it does seem like an unfortunate missed opportunity in a book about technology and cities not to engage with the prominent technological critic Lewis Mumford, whose oeuvre includes numerous books specifically on the topic of technology and cities (he won the National Book Award for his volume The City in History). And these matters of cities, speed, and vehicles were topics with which many other critics of technology engaged in the twentieth century. Indeed, the rise of the auto-centric society has had its critics all along the way, and it could have been fascinating to engage with more of those figures. Marx certainly makes a strong case for the ways in which Silicon Valley’s designs on the city are informed by its particular ideology, but engaging more closely with earlier critics of technology could have opened up other spaces for considering broader problems about ideologies surrounding technology that predate Silicon Valley. Of course, it is unfair to criticize an author for the book they did not write, and the intention is not to take away from Marx’s important book—but contemporary criticism of technology has much to gain not just from the history of technology but from the history of technological criticism.

    Road to Nowhere is a challenging book in the best sense of that word, for it discomforts the reader and pushes them to see the world around them in a new light. Marx achieves this particularly well by refusing to be taken in by easy solutions, and by recognizing that even as techno-fixes may be the standard offering from Silicon Valley, a belief in such fixes permeates well beyond the pitches of tech firms. Nevertheless, Marx is also clear in recognizing that even as many of our problems flow from and have been exacerbated by technology, technology needs to be seen as part of the solution. And here, Marx is deft at considering the way in which technology represents a much more robust and wide-ranging category than the too-simplistic version it is often reduced to when conversations turn to “tech.” Thus, the matter is nothing so ridiculous as being “pro-technology” or “anti-technology”; rather, it is recognizing “that technology is not the primary driver in creating fairer and more equitable cities and transportation systems,” and that what is necessary is “deeper and more fundamental change to give people more power over the decision that are made about their communities” (8). The matter is not just about technology (as such), but about the value systems embedded in particular sorts of technologies, and about recognizing that certain sets of technologies are going to be better for achieving particular social goals. After all, “the technologies unleashed by Silicon Valley are not neutral” (179), though the same is also very much true of the technologies that were unleashed before Silicon Valley. Constructing a different world thus requires us to consider not only how we can remake that world, but how we can remake our technologies. As Marx wonderfully puts it, “when we assume that technology can only develop in one way, we accept the power of the people who control that process, but there is no guarantee that their ideal world is one that truly works for everyone” (179).

    You can learn a lot about your society’s relationship to technology by looking at its streets. And Road to Nowhere is a powerful reminder that those streets do not have to look the way they do, and that we have a role to play in determining what future those streets are taking us towards.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

  • Zachary Loeb — Is Big Data the Message? (Review of Natasha Lushetich, ed., Big Data—A New Medium?)

    a review of Natasha Lushetich, ed., Big Data—A New Medium? (Routledge, 2021)

    by Zachary Loeb

    When discussing the digital, conversations can quickly shift towards talk of quantity. Just how many images are being uploaded every hour, how many meticulously monitored purchases are being made on a particular e-commerce platform every day, how many vehicles are being booked through a ride-sharing app at 3 p.m. on Tuesday afternoon, how many people are streaming how many shows/movies/albums at any given time? The specific answer to the “how much?” and “how many?” will obviously vary depending upon the rest of the question, yet if one wanted to give a general response across these questions it would likely be fair to answer with some version of “a heck of a lot.” Yet from this flows another, perhaps more complicated and significant question, namely: given the massive amount of information being generated by seemingly every online activity, where does all of that information actually go, and how is that information rendered usable and useful? To this the simple answer may be “big data,” but this in turn just serves to raise the question of what we mean by “big data.”

    “Big data” denotes the point at which data begins to be talked about in terms of scale, not merely gigabytes but zettabytes. And, to be clear, a zettabyte represents a trillion gigabytes—and big data is dealing with zettabytes, plural. Beyond the sheer scale of the quantity in question, considering big data “as process and product” involves a consideration of “the seven Vs: volume” (the amount of data previously generated and newly generated), “variety” (the various sorts of data being generated), “velocity” (the highly accelerated rate at which data is being generated), “variability” (the range of types of information that make up big data), “visualization” (how this data can be visually represented to a user), “value” (how much all of that data is worth, especially once it can be processed in a useful way), and “veracity” (the reliability, trustworthiness, and authenticity of the data being generated) (3). In addition to these “seven Vs” there are also the “three Hs: high dimension, high complexity, and high uncertainty” (3). Granted, “many of these terms remain debatable” (3). Big data is both “process and product” (3); its applications range from undergirding the sorts of real-time analysis that makes it possible to detect viral outbreaks as they are happening, to the directions app that is able to suggest an alternative route before you hit traffic, to the recommendation software (be it banal or nefarious) that forecasts future behavior based on past actions.

    To the extent that discussions around the digital generally focus on the end results of big data, the means remain fairly occluded both from public view and from many of the discussants. And while big data has largely been accepted as an essential aspect of our digital lives by some, for many others it remains highly fraught.

    As Natasha Lushetich notes, “in the arts and (digital) humanities…the use of big data remains a contentious issue not only because data architectures are increasingly determining classificatory systems in the educational, social, and medical realms, but because they reduce political and ethical questions to technical management” (4). And it is this contentiousness that is at the heart of Lushetich’s edited volume Big Data—A New Medium? (Routledge, 2021). Drawing together scholars from a variety of different disciplines ranging across “the arts and (digital) humanities,” this book moves beyond an analysis of what big data is to a complex consideration of what big data could be (and may be in the process of currently becoming). In engaging with the perils and potentialities of big data, the book (as its title suggests) wrestles with the question of whether big data can be seen as constituting “a new medium.” Through engaging with big data as a medium, the contributors to the volume grapple not only with how big data “conjugates human existence” but also how it “(re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” and how it “enhances, obsolesces, retrieves and pushes to the limits of potentiality” (8). Across four sections, the contributors grapple with big data in terms of knowledge and time, use and extraction, cultural heritage and memory, as well as people.

“Patterning Knowledge and Time” begins with a chapter by Ingrid M. Hoofd that places big data in the broader trajectory of the university’s attempt to make the whole of the world knowable. Considering how “big data renders its object of analysis simultaneously more unknowable (or superficial) and more knowable (or deep)” (18), Hoofd’s chapter examines how big data replicates and reinforces the ways in which what becomes legitimated as knowable is precisely that which can be known through the university’s (and big data’s) techniques. Following Hoofd, Franco “Bifo” Berardi provocatively engages with the power embedded in big data, treating it as an attempt to assert computerized control over a chaotic future by forcing it into a predictable model. Here big data is treated as a potential constraint wherein “the future is no longer a possibility, but the implementation of a logical necessity inscribed in the present” (43), as participation in society becomes bound up with making one’s self and one’s actions legible and analyzable to the very systems that enclose one’s future horizons. Shifting towards the visual and the environmental, Abelardo Gil-Fournier and Jussi Parikka consider the interweaving of images and environments and how data impacts this. As Gil-Fournier and Parikka explore, as a result of developments in machine learning and computer vision “meteorological changes” are increasingly “not only observable but also predictable as images” (56).

The second part of the book, “Patterning Use and Existence,” starts with Btihaj Ajana reflecting on the ways in which “surveillance technologies are now embedded in our everyday products and services” (64). By juxtaposing the biometric control of refugees with the quantified-self movement, Ajana explores the datafication of society and the differences (as well as similarities) between willing participation and forced participation in regimes of surveillance of the self. Highlighting a range of well-known gig-economy platforms (such as Uber, Deliveroo, and Amazon Mechanical Turk), Tim Christiaens examines the ways that “the speed of the platform’s algorithms exceeds the capacities of human bodies” (81). While offering a thorough critique of the inhuman speed imposed by gig-economy platforms and their algorithms, Christiaens also offers a hopeful argument for the possibility that, by making their software open source, some of these gig platforms could “become a vehicle for social emancipation instead of machinic subjugation” (90). While aesthetic and artistic considerations appear in earlier chapters, Lonce Wyse’s chapter pushes fully into this area by looking at the ways that deep learning systems create the sorts of works of art “that, when recognized in humans, are thought of as creative” (95). Wyse provides a rich, yet succinct, examination of how these systems function while highlighting the sorts of patterns that emerge (sometimes accidentally) in the process of training these systems.

At the outset of the book’s third section, “Patterning Cultural Heritage and Memory,” Craig J. Saper approaches the magazine The Smart Set as an object of analysis, zooming in and out to show what is revealed and what is obfuscated at different scales. Highlighting that “one cannot arbitrarily discount or dismiss particular types of data, big or intimate, or approaches to reading, distant or close,” Saper’s chapter demonstrates how “all scales carry intellectual weight” (124). Moving away from the academic and the artist, Nicola Horsley’s chapter reckons with the work of archivists and the ways in which their intellectual labor and the tasks of their profession have been challenged by digital shifts. Archival training teaches archivists that “the historical record, on which collective memory is based, is a process not a product” (140), a lesson archivists seek to convey in their interactions with researchers; Horsley considers the ways in which the shift away from the physical archive and towards the digital archive (wherein a researcher may never directly interact with an archivist or librarian) means this “process” risks going unseen. From the archive to the work of art, Natasha Lushetich and Masaki Fujihata’s chapter explores Fujihata’s project BeHere: The Past in the Present and how augmented reality opens up the space for new artistic experience and challenges how individual memory is constructed. Through its engagement with “images obtained through data processing and digital frottage” the BeHere project reveals “new configurations of machinically (rather than humanly) perceived existents” and thus can “shed light on that which eludes the (naked) human eye” (151).

The fourth and final section of the volume begins with Dominic Smith’s exploration of the aesthetics of big data. Referring back to the “seven Vs” of big data, Smith argues that to imagine big data as a “new medium” requires considering “how we make sense of data” in regards to both “how we produce it” and “how we perceive it” (164), a matter Smith explores through an analysis of the “surfaces and depths” of oceanic images. Though big data is closely connected with sheer scale (hence the “big”), Mitra Azar observes that “it is never enough as it is always possible to generate new data and make more comprehensive data sets” (180). Tangling with this in a visual register, Azar contrasts the cinematic point of view with that of the big-data-enabled “data double” of the individual (which is meant to stand in for that user). Considering several of his own artistic installations—Babel, Dark Matter, and Heteropticon—Simon Biggs examines the ways in which big data reveals “the everyday and trivial and how it offers insights into the dense ambient noise that is our daily lives” (192). In contrast to treating big data as a revelator of the sublime, Biggs discusses big data’s capacity to show “the infra-ordinary” and the value of seemingly banal daily details. The book concludes with Warren Neidich’s speculative gaze toward what the future of big data might portend, couched in a belief that “we are at the beginning of a transition from knowledge-based economics to a neural or brain-based economy” (207). Surveying current big data technologies and the trajectories they may suggest, Neidich forecasts “a gradual accumulation of telepathic technocueticals” such that “at some moment a critical point might be reached when telepathy could become a necessary skill for successful adaptation…similar to being able to read in today’s society” (218).

    In the introduction to the book, Natasha Lushetich grounds the discussion in a recognition that “it is also important to ask how big data (re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” (8), and over the course of this fascinating and challenging volume, the many contributors do just that.

    ***

The term big data captures the way in which massive troves of digitally sourced information are made legible and understandable. Yet one of the challenges of discussing big data is trying to figure out a way to make big data itself legible and understandable. In discussions around the digital, big data is often gestured at rather obliquely as the way to explain a lot of mysterious technological activity in the background. We may not find ourselves capable, for a variety of reasons, of prying open the various black boxes of a host of different digital systems, but stamped in large letters on the outside of those boxes are the words “big data.” When shopping online or using a particular app, a user may be aware that the information being gathered from their activities is feeding into big data and that the recommendations being promoted to them come courtesy of the same. Or they may be obliquely aware that there is some sort of connection between the mystery-shrouded algorithms and big data. Or the very evocation of “big” when twinned with a recognition of surveillance technologies may serve as a discomforting reminder of “big brother.” Or “big data” might simply sound like a non-existent episode of Star Trek: The Next Generation in which Lieutenant Commander Data is somehow turned into a giant. All of which is to say that though big data is not a new matter, the question of how to think about it (which is not the same as how to use and be used by it) remains a challenging one.

With Big Data—A New Medium?, Natasha Lushetich has assembled an impressive group of thinkers to engage with big data in a novel way. By raising the question of big data as “a new medium,” the contributors shift the discussion away from considerations focused on surveillance and algorithms to wrestle with the ways that big data might be similar to, and distinct from, other mediums. While this shift does not represent a rejection of, or a move to ignore, the important matters related to issues like surveillance, the focus on big data as a medium raises a different set of questions. What are the aesthetics of big data? As a medium, what are the affordances of big data? And what does it mean for other mediums that in the digital era so many of those mediums are themselves being subsumed by big data? After all, many of the older mediums that theorists have grown so accustomed to discussing have undergone some not insignificant changes as a result of big data. And yet to engage with big data as a medium also opens up a potential space for engaging with big data that does not treat it as being wholly captured and controlled by large tech firms.

The contributors to the volume do not seem to be fully in agreement with one another about whether big data represents poison or panacea, but the chapters are clearly speaking to one another instead of shouting over each other. There are certainly some contributions to the book, notably Berardi’s, with its evocation of a “new century suspended between two opposite polarities: chaos and automaton” (44), that seem a bit more pessimistic, while other contributors, such as Christiaens, engage with the unsavory realities of contemporary data-gathering regimes but envision the ways that these can be repurposed to serve users instead of large companies. And such optimistic and pessimistic assessments come up against multiple contributions that eschew such positive/negative framings in favor of an artistically minded aesthetic engagement with what it means to treat big data as a medium for the creation of works of art. Taken together, the chapters in the book provide a wide-ranging assessment of big data, one which is grounded in larger discussions around matters such as surveillance and algorithmic bias, but which pushes readers to think of big data beyond those established frameworks.

As an edited volume, one of the major strengths of Big Data—A New Medium? is the way it brings together perspectives from such a variety of fields and specialties. As part of Routledge’s “studies in science, technology, and society” series, the volume demonstrates the sort of interdisciplinary mixing that makes STS such a vital space for discussions of the digital. Granted, this very interdisciplinary richness can be as much burden as benefit, as some readers will wish there had been slightly more representation of their particular subfield, or that the scholarly techniques of their discipline had seen greater use. Case in point: Horsley’s contribution will be of great interest to those approaching this book from the world of libraries and archives (and information schools more generally), and some of those same readers will wish that other chapters in the book had been equally attentive to the work done by archive professionals. Similarly, those who approach the book from fields more grounded in historical techniques may wish that more of the authors had spent more time engaging with “how we got here” instead of focusing so heavily on the exploration of the present and the possible future. Of course, these are always the challenges with edited interdisciplinary volumes, and it is a major credit to Lushetich as an editor that this volume provides readers from so many different backgrounds with so much to mull over. Beyond presenting numerous perspectives on the titular question, the book is also an invitation to artists and academics to join in discussion about that titular question.

Those who are broadly interested in discussions around big data will find much in this volume of significance, and will likely find their own thinking pushed in novel directions. That being said, this book will likely be most productively read by those who are already somewhat conversant in debates around big data, the digital humanities, the arts, and STS more generally. While the contributors are consistently careful in clearly defining their terms and referencing the theorists from whom they are drawing, from Benjamin to Foucault to Baudrillard to Marx to Deleuze and Guattari (to name but a few), much of the commentary is couched in theory, and a reader of this volume will be best able to engage with these chapters if they have at least some passing familiarity with those theorists themselves. Many of the contributors to this volume are also clearly engaging with arguments made by Shoshana Zuboff in Surveillance Capitalism, and this book can be very productively read as a critique of and complement to Zuboff’s tome. Academics in and around STS, and artists who incorporate the digital into their practice, will find that this book makes a worthwhile intervention into current discourse around big data. And though the book seems to assume a fairly academically engaged readership, this book will certainly work well in graduate seminars (or advanced undergraduate classrooms)—many of the chapters stand quite well on their own, though much of the book’s strength is in the way the chapters work in tandem.

    One of the claims that is frequently made about big data is that—for better or worse—it will allow us to see the world from a fresh perspective. And what Big Data—A New Medium? does is allow us to see big data itself from a fresh perspective.

    _____

Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

    Back to the essay

  • Zachary Loeb — Specters of Ludd (Review of Gavin Mueller, Breaking Things at Work)

    Zachary Loeb — Specters of Ludd (Review of Gavin Mueller, Breaking Things at Work)

    a review of Gavin Mueller, Breaking Things at Work: The Luddites Were Right about Why You Hate Your Job (Verso, 2021)

    by Zachary Loeb

    A specter is haunting technological society—the specter of Luddism.

    Granted, as is so often the case with hauntings, reactions to this specter are divided: there are some who are frightened, others who scoff at the very idea of it, quite a few dream about designing high-tech gadgets with which to conclusively bust this ghost so that it can bother us no more, and still others are convinced that this specter is trying to tell us something important if only we are willing to listen. And though there are plenty of people who have taken to scoffing derisively whenever the presence of General Ludd is felt, there would be no need to issue those epithetic guffaws if they were truly directed at nothing. The dominant forces of technological society have been trying to exorcize this spirit, but instead of banishing this ghost they only seem to be summoning it.

The problem with spectral Luddism is that one can feel its presence without necessarily understanding what it means. When one encounters Luddism in the world today it still tends to appear either as a term of self-deprecation used to describe why someone has an old smartphone, or as an insult hurled at anyone who dares question “the good news” presented by the high priests of technology. With Breaking Things at Work: The Luddites Were Right About Why You Hate Your Job, Gavin Mueller challenges those prevailing attitudes and ideas about Luddism, instead articulating a perspective on Luddism that finds in it a vital analysis with which to respond to techno-capitalism. Luddism, in Mueller’s argument, is not simply a term to describe a specific group of workers at the turn of the 19th century; rather, Luddism can be seen in workers’ struggles across centuries.

At its core, Breaking Things at Work is less a history of Luddism and more a manifesto. Historic movements and theorists are thoughtfully engaged with throughout the volume, but this is consistently in service of making an argument about how we should be responding to technology in the present. While contemporary books about technology (even ones that advance a critical attitude) have a tendency to carefully couch any criticism in neatly worded expressions of love for technology, Mueller’s book is refreshing in the forthrightness with which he expresses the view that “technology often plays a detrimental role in working life, and in struggles for a better one” (4). In clearly setting out the particular politics of his book, Mueller makes his goal clear: “to make Marxists into Luddites” and “to turn people critical of technology into Marxists” (5). This is no small challenge, as Mueller notes that “historically Marxists have not been critical of technology” (4) on the one hand, and that “much of contemporary technological criticism comes from a place of romantic humanism” (6) on the other. For Mueller “the problem of technology is its role in capitalism” (7), but the way in which many of these technologies have been designed to advance capitalism’s goals makes it questionable whether all of these machines can necessarily be repurposed. Basing his analysis on a history of class struggle, Mueller is not so much setting out to tell workers what to do as putting a name on something that workers are already doing.

    Mueller begins the first chapter of his book by explaining who the actual Luddites were and providing some more details to explain the tactics for which they became legendary. As skilled craft workers in early 19th century England, the historic Luddites saw firsthand how the introduction of new machines resulted in their own impoverishment. Though the Luddites would become famous for breaking machines, it was a tactic they turned to only after their appeals to parliament to protect their trades went ignored. With broad popular support, the Luddites donned the anonymizing mask of General Ludd, and took up arms in their own defense. Contrary to the popular myth in which the Luddites smashed every machine out of a fit of wild hatred, the historic record shows that the Luddites were quite focused in their targets, picking workshops and factories where the new machines had been used as an excuse to lower wages. Luddism did not die out in its moment because the tactics were seen as pointless, rather the movement came to an end at the muzzle of a gun, as troops were deployed to quell the uprising—with many of the captured Luddites being either hanged or transported. Nevertheless, this was certainly not the last time that machine-breaking was taken up as a tactic: not long after the Luddite risings the Swing Riots were even more effective in their targeting of machinery. And, furthermore, as Mueller makes clear throughout his book, the tactic of seeing the machine as a site for resistance continues to this day.

Perhaps the key takeaway from the historic Luddites is not that they smashed machines, but that they identified machinery as a site of political struggle. They did not take hammers to stocking frames out of a particular hatred for these contraptions; rather they took hammers to stocking frames as a way of targeting the owners of those stocking frames. These struggles, in which groups of workers came together with community support, demonstrate how the Luddites’ various tactics served as “practices of political composition” (16, italics in original text) whereby the Luddites came to see themselves as workers with shared interests that were in opposition to the interests of their employers. The Luddites were not to be assuaged by appeals to the idea of progress, or lurid fantasies of a high-tech utopia; they could see the technological changes playing out in real time in front of them, and what they could see there was not a distant future of plenty but an immediate future of immiseration. The Luddites were not fools, quite the contrary: they saw exactly what the new machines meant for themselves and their communities, and so they decided to do something about it.

Despite the popular support the Luddites enjoyed in their own communities, and the extent to which machine-breaking remained a common tactic even after the Luddite risings had been repressed, already in the 19th century more optimistic attitudes towards technology were ascendant. Mueller detects some of this optimism in Karl Marx, noting that “there is evidence for a technophilic Marx” (19), yet he pushes back against the common assumption that Marx was a technological determinist. While recognizing that Marx (and Engels) had made some less than generous comments about the Luddites, Mueller emphasizes Marx’s attention to the real struggles of workers against capitalism and notes that “the struggles against machines were the struggles against the society that utilized them” (24, italics in original text). And the frequency with which machines were becoming targets of workers’ ire in the 19th century demonstrates the way in which workers saw the machines not as neutral tools but as instruments of the factory owners’ power. While defenders of mass machinery may point to the abundance such machines create, figures like William Morris pushed back on these promises of abundance by noting that such machinery sapped any pleasure out of the act of laboring while the abundance on offer was just a share in shoddy goods. In Marx and Morris, as well as in the actual struggles of workers, Mueller points to the importance of technology becoming recognized as a site of political struggle—emphasizing that in workers’ resistance to technology can be found “a more liberatory politics of work and technology” (29).

That the 19th century was home to the most renowned fight against technology does not mean that these struggles (be they physical or philosophical) ended with the arrival of the 20th century. While much is often made of the “scientific management” of Frederick W. Taylor, less is said of the ways in which workers resisted this system that turned them into living cogs—and even less is usually said of the strike at the Watertown Arsenal wherein (quite unlike the case of the Luddites) Congress sided with the workers (and their union). Nevertheless, the Taylorist viewpoint that “capitalist technologies like scientific management” were “an objective way to improve productivity and therefore the condition of workers” (35) was a viewpoint shared by a not inconsiderable number of socialists in those years. Within the international left of the early 20th century, debates about the meaning of machinery were heated: some like Karl Kautsky took a deterministic stance that developments in capitalist production methods were paving the way for communism; others like the IWW activist Elizabeth Gurley Flynn cheered the tactic of workers sabotaging their machines; still others like Thorstein Veblen dreamed of a technocratic society overseen by benevolent engineers; various Bolsheviks argued about the deployment of Taylorist techniques in the new Soviet state; and standing at the edge of the fascist abyss Walter Benjamin gestured towards a politics that does not praise speed but searches desperately for an emergency brake.

While the direction of debates about technology in the early 20th century was significantly disrupted by the Second World War (just as it had been upended by the First World War), in the aftermath of Auschwitz and Hiroshima debates about technology and work only intensified. Automation represented a new hope to business owners even as it represented a new threat to workers, as automation could sap the power of agitated workers while centralizing further control in the hands of management. Importantly, automation was not simply accepted by workers, and Mueller notes “on the vanguard of opposing automation were those often marginalized by the official workers’ movement—women and African Americans” (63). Opposition to automation often took the form of “wildcat strikes” with union leaders failing to keep pace with the radicalism and fury of their members. In this period of post-war tumult, left-wing thinkers ranging from Raya Dunayevskaya to Herbert Marcuse to Shulamith Firestone articulated a spectrum of different responses to the promises and perils of automation—yet even as they theorized, workers in mines, factories, and at the docks continued to strike against what the introduction of automation meant for their lives. Simultaneously, automation became a topic of interest, and debate, within the social movements of the time, which viewed automation as both threat and hope.

    Lurking in the background of many of the discussions around automation was the spread of computers. As increasing numbers of people became aware of them, computers quickly conjured both adoration and dread—they were a frequent target of student activists in the 1960s and 1970s, even as elements of the counterculture (such as Stewart Brand’s Whole Earth Catalog) were enthusiastic about computers. Businesses were quick to adopt computers, and these machines often accelerated the automation of workplaces (while opening up new types of work to the threat of being automated). Yet the rise of the computer also gave rise to a new sort of figure, “the hacker” whose very technological expertise positioned them to challenge computerized capitalism. Though the “politics of hackers are complicated,” Mueller emphasizes that they are often some of technology’s “most critical users, and they regularly deploy their skills to subvert measures by corporations to rationalize and control computer user behavior. They are often Luddites to the core” (105). Not uniformly uncritical celebrants of technology, many hackers turn their intimate knowledge of computers into a way of knowing where best to strike—even as they champion initiatives such as free software, peer-to-peer sharing, and tools for avoiding surveillance.

Yet as computers have infiltrated nearly every space and moment, it is not only hackers who find themselves regularly interacting with these machines. The omnipresence of computers creates a situation wherein “work seeps into every nook and cranny of human existence via capitalist technologies, accompanied by the erosion of wages and free time” (119). As more and more of our activities become fodder for corporate recommendation algorithms, we find ourselves endlessly working for Facebook and Google, even as we respond to work emails at 1 a.m. Despite the promises of digital plenty, computing technologies (broadly defined) seem to be giving rise to an increasing sense of frustration, and though there are some who advocate for an anodyne “tech humanism,” it may well be that “the strategy of refusal pursued by the industrial workers of old might be a more promising technique against the depression engines of social media” (122).

    Breaking Things at Work concludes with a call for the radical left to “put forth a decelerationist politics: a politics of slowing down change, undermining technological progress, and limiting capital’s rapacity, while developing organization and cultivating militancy” (127-128). Such a politics entails not a rejection of progress, but a critical reexamination of what it is that is actually meant when the word “progress” is bandied about, as too often what progress stands for is “the progress of elites at the expense of the rest of us” (128). Putting forth such a politics does not require creating something entirely new, but rather recognizing that the elements of just such a politics can be seen repeatedly in worker’s movements and social movements.

In putting forth a clear definition of “Luddism,” Mueller highlights that Luddism “emphasizes autonomy” by seeking to put control back into the hands of the people actually doing the work, “views technology not as neutral but as a site of struggle,” “rejects production for production’s sake,” “can generalize” into a strategy for mass action, and is “antagonistic,” taking a firm stance in clear opposition to capitalism and capitalist technology. In the increasing frustration with social media, in the growing environmental calls for “degrowth,” and in the cracks showing in the golden calf of technology, the space is opening for a politics that takes up the hammer of Luddism. Such a politics recognizes, as it does so, that a hammer can be used not just to smash things that need to be broken but also to build something different.

    *

    One of the factors that makes Luddism so appealing more than two centuries later is that it is an ideology that still calls out to be developed. The historic Luddites were undoubtedly real people, with real worries, and real thoughts on the tactics that they were deploying—and yet the historic Luddites did not leave any manifestoes or books of their own writing behind. What remains from the Luddites are primarily the letters they sent and snatches of songs in which they were immortalized (which have been helpfully collected in Kevin Binfield’s 2015 Writings of the Luddites). And though one can begin to cobble together a philosophy of technology from reading through those letters, the work of explaining exactly what it is that Luddism means has been a task that has largely fallen to others. Granted, part of what made the Luddites successful in their time was that the mask of General Ludd could be picked up and worn by many individuals, all of whom could claim to be General Ludd (or his representative).

    With Breaking Things at Work, Gavin Mueller has crafted a vital contribution to Luddism, and what makes this book especially important is the way in which it furthers Luddism in a variety of ways. On one level, Mueller’s book provides a solid introduction and overview to Luddite thinking and tactics throughout the ages, which makes the book a useful retort to those who act as though the historic Luddites were the only workers who ever dared oppose machinery. Yet Mueller makes it clear from the outset of his book that he is not primarily interested in writing a history, rather his book has a clear political goal as well—he wishes to raise the banner of General Ludd and encourage others to march behind this standard. Thus, Mueller’s book is simultaneously an account of Luddism’s past, while also an appeal for Luddism’s future. And while Mueller provides a thoughtful consideration of many past figures and movements that have dallied with Luddism, his book concludes with a clear articulation of what a present day Luddism might look like. For those who call themselves Luddites, or those who would call themselves Luddites, Mueller provides a historically grounded but present focused account of what it meant, and what it can mean, to be a Luddite.

    The clarity with which Mueller defines Luddism in Breaking Things at Work places the book into a genuine debate as to how exactly Luddism should be defined. And this is a debate that Mueller’s book engages with in a particularly provocative way, considering that his book is both a scholarly account and an activist manifesto. Writing about the Luddites tends to fall into several camps: works that provide a fairly straightforward historical account of who the original Luddites were and what they literally did (this genre includes works like E.P. Thompson’s The Making of the English Working Class and Kevin Binfield’s Writings of the Luddites); works that treat Luddism as an idea and a philosophy not exclusive to the historic Luddites (this genre includes works like Nicols Fox’s Against the Machine and Matt Tierney’s Dismantlings); works that emphasize that the tactic of machine-breaking was not practiced exclusively by the Luddites (this genre includes works like Eric Hobsbawm and George Rudé’s Captain Swing and David Noble’s Progress Without People); and works that draw lines (good or bad) from Luddism to later activist practices (this genre includes approving works like Kirkpatrick Sale’s Rebels Against the Future and disapproving works like Steven Jones’s Against Technology). Mueller’s Breaking Things at Work does not fit neatly into any single one of those categories: the Marxist analysis makes the book pair nicely with Thompson’s, the engagement with radical theorists makes it pair nicely with Tierney’s, the treatment of machine-breaking as a common tactic makes it pair nicely with Noble’s, and the call to arms places it into debate with books by the likes of Sale and Jones.

    All of which is to say, the meaning of Luddism remains contested terrain. And even though many of technology’s celebrants remain content to use Luddite as an insult, those who would proudly wear the mask of General Ludd are not themselves all in agreement about exactly what this means.

    Mueller has written a wonderfully provocative book, and it is one in which he does not attempt to hide his own opinion behind two dozen carefully composed distractions. Instead, Mueller is quite clear that “to be a good Marxist is to also be a Luddite” (5), and this is a point that leads directly into his goal of turning Marxists into Luddites and making Marxists out of those who are critical of technology. In his engagement with Marx, Mueller tangles with the perception of Marx as technophilic and engages with a variety of Marxist thinkers who fall into a range of camps, all while trying “to be faithful to Marxism’s heretical side, its unofficial channels and para-academic spaces” (vii). And all the while Mueller endeavors to keep his book grounded as a contribution to real struggles around technology in the world today. Considering Mueller’s clear statement of his own position, it is likely that some will level their critiques at the book’s Marxism, while others might critique the book for not being sufficiently Marxist. And as is always the case with books that situate their critique within a particular radical tradition, it seems inevitable that some will wonder why their favorite thinker is not included (or does not receive more attention), even as others will wonder why other branches from the tree of the radical left are missing. (Mueller does not spend much time on anarchist thinkers.)

    Overall, the question of whether this book will turn its Marxist readers into Luddites, and its technologically critical readers into Marxists, is one that can only be answered by each reader. For what Mueller’s book presents is an argument, and whether a reader nods along or argues back is likely to be heavily influenced by how they personally define Luddism. And Mueller is not the first to try to rally people beneath the Luddite’s standard.

    In 1990, Chellis Glendinning published her “Notes Towards a Neo-Luddite Manifesto” in the pages of the Utne Reader. Furiously lamenting the ways in which societies were struggling under the onslaught of new technologies, her manifesto was a call to take up oppositional arms. While taking on the mantle of “Neo-Luddite,” the manifesto articulated a Luddism (or Neo-Luddism) defined by three principles: “1. Neo-Luddites are not anti-technology,” “2. All technologies are political,” and “3. The personal view of technology is dangerously limited.” Based on these principles, Glendinning’s manifesto laid out a program that included dismantling a range of “destructive” technologies (including genetic engineering technologies and computer technologies) and searching for “new technological forms” that would be “for the benefit of life on Earth,” all of which was couched in a call for “Western technological societies” to develop a “life-enhancing worldview.” The manifesto drew on the technological criticism of Lewis Mumford, on Langdon Winner’s call for “epistemological Luddism,” and on the uncompromising stance towards technologies deemed destructive typified by Jerry Mander’s Four Arguments for the Elimination of Television.

    The Neo-Luddites are more noteworthy for their attempt to reclaim and redefine Luddism than for their success in actually creating a movement. Indeed, the lasting legacy of Neo-Luddism is not a vital social movement that fought for (and continues to fight for) the principles Glendinning put forth, but about half a bookshelf’s worth of books with “Neo-Luddite” somewhere in their titles. There are certainly critiques to be leveled at the Neo-Luddites, but when revisiting Glendinning’s manifesto it is also worth placing it in the moment at which it emerged. The backdrop for Breaking Things at Work is one in which most readers will be accustomed to seemingly omnipresent computing technologies, climate-exacerbated disasters, and a world in which the wealth of tech billionaires grows massively by the minute. By contrast, the backdrop for Glendinning’s manifesto was a moment in which personal computers had not yet achieved ubiquity (no one was carrying the Internet around in their pocket), climate change still seemed like a distant threat, and Mark Zuckerberg was still a child. It is impossible to say whether Glendinning’s manifesto, had it been heeded, could have prevented us from getting into our present morass, but preventing us from winding up where we are now certainly seems to have been one of Glendinning’s goals. At the very least, Glendinning and the Neo-Luddites (as well as the thinkers upon whom they drew) are a reminder that the spirit of General Ludd was circulating before you could Google “Luddism.”

    There are many parallels between the stances outlined by Glendinning and those outlined by Mueller, though the key space of conflict between the two is the question of dismantling. Glendinning and the Neo-Luddites were not subtle in their calls for dismantling certain technologies, whereas Mueller is considerably more nuanced in this respect. Here attempts to define Luddism find themselves butting against the degree to which Luddism is destined to always be associated (for better or worse) with the actual breaking of machines. Naming entire classes of technology that need to be dismantled may look like indiscriminate smashing, while calls for careful reevaluation of technologies may look more like thoughtful disassembly. Yet the underlying question for Luddism remains: are certain technologies irredeemable? Are there technologies that we can remake in a different image, or will those technologies only reshape us in their own image? And if the answer is that these technologies cannot be reshaped, then are there some technologies that we need to break before they can finish breaking us, even if we often find ourselves enjoying some of their benefits?

    Writing of the reactions from a range of 1960s social movements to the technological changes they saw playing out, Mueller notes that the particular technology that evoked “both fear and fascination” was none other than “the computer” (91). This point leads into what is perhaps the most troubling and challenging element of Mueller’s account, as he goes on to argue that hackers and some of their projects (like free software) fit within the legacy of Luddism. I imagine that many hackers will not be too pleased to see themselves described as Luddites, just as I imagine that many self-professed Luddites will scoff at the idea that using bitcoins to buy drugs on the dark web is a Luddite pursuit. Yet the idea that those most familiar with a technology may know exactly where to strike certainly has some noteworthy resonances with the historic Luddites.

    And yet the matter of hackers and “high tech Luddism” raises a much broader question, one that the left has been trying to answer for quite some time, and perhaps the key question for any attempt to formulate a Luddite politics in this moment: what are we to make of the computer? Is the computer (and computing technologies, broadly defined) the offspring of the military-industrial-academic complex, with logics of control, surveillance, and dominance so deeply ingrained that it ultimately winds up bending all users to that logic? Despite those origins, are computing technologies something that can be seized upon to allow us to reconfigure ourselves into new sorts of beings (cyborgs, perhaps) and break out of the very categories that capitalism tries to sort us into? Have computers fundamentally altered what it means to be human? Is the computer (and the Internet) simply something that has become so big and so widespread that the best we can hope for is to increase our knowledge of it so that we can perform sabotage strikes while playing in the dark corners? Are computers the “master’s tools”?

    Considering that computer technologies were amongst those that the Neo-Luddites called to be dismantled, it seems pretty clear where they came down on this question. Yet the contemporary discussion on the left around computers, a discussion in which Breaking Things at Work is certainly making an intervention, is quite a bit more divided as to what is to be done with and about computers. At several junctures in his book, Mueller notes that attitudes of technological optimism are starting to break down, yet if you survey the books dealing with technology published by the left-wing publisher Verso Books (which is the publisher of Breaking Things at Work) it is clear that a hopeful attitude towards technology is still present in much of the left. Certainly, there are arguments about the way that tech companies are screwing things up, commentary on the environmental costs of the hunger for high-tech gadgets, and paeans for how the Internet could be different—but it often feels that leftist commentaries blast Silicon Valley for what it has done to computers and the Internet so that the readers of such books can continue believing that the problems with computers and the Internet are what capitalism has done to them, rather than suggesting that these are capitalist tools through and through.

    Is the problem that the train we are on is taking us somewhere we don’t want to go, so we need to slow down in order to switch tracks? Or is the problem the train itself, such that we need to hit the emergency brake so that we can get off? To those who have grown accustomed to the comforts of being on board, the idea of getting off might be a scary thought; it might feel preferable to fight for a more equitable distribution of resources aboard the train, or to fight to seize control of the engine car. Besides, the idea of actually getting off the train seems like little more than a fantasy—it will be hard enough just to get it to reduce its speed. Yet the question remains as to whether the problem is the direction we’re going in, or the direction we’re going in and the technology that is taking us there.

    Here it is essential to return to an important fact about the historic Luddites: they were waging their campaign against the introduction of machinery in the moment of those machines’ newness. The machines they attacked had not yet become common, and the negotiation as to what these machines would mean and how they would be deployed was still in flux. When technologies are new they provide a fertile space for resistance: in their moment of freshness they have not yet become taken for granted, previous lifeways have not been forgotten, the skills that were necessary prior to the introduction of the new machine remain vital, and the broader society has not become pleasantly accustomed to its share of machine-generated plenitude. Unfortunately, once a technology has become fully incorporated into a workplace (or a society), resistance becomes more and more challenging. While Mueller evocatively captures the long history of workers resisting the introduction of new technologies, these cases show a consistent tendency for resistance to be strongest at the point of a technology’s introduction. The major challenge becomes what to do when the technology has ceased being new, and when reliance on it has become so total that it is almost impossible to imagine turning it off.

    After all, it’s easy to say that “computers are the problem” but at this point it’s easier to imagine the end of capitalism than it is to imagine the end of computers. And besides, many of those who would be quite happy to see capitalism come to an end quite like their computerized doodads and would be distressed if they couldn’t scroll social media on the subway, stream music, go shopping at 2 a.m., play video games, have video calls with distant family, or write overly lengthy book reviews and then post them online. One of the major challenges for technological criticism today is the simple fact that the critics are also reliant on these gadgets, and many of the critics quite like some things about some of those gadgets. In this technological climate, where the idea of truly banishing certain technologies seems fantastical, feelings of dissatisfaction often wind up getting channeled in the direction of appeals to personal responsibility. As though an individual deciding that they will abstain from going on social media on the weekend will somehow be a sufficient response to social media eating the world. This is the way in which a massive social problem winds up being reduced to telling people that they really just need to turn off notifications on their phones.

    What makes Breaking Things at Work, and its definition of Luddism, vital is the way in which Mueller eschews such appeals to minor lifestyle tweaks. As Mueller makes clear, the significance of the Luddites is not that they broke machines, but that they saw machines as a site of political struggle, and what we need to learn from them today is that machinery must still be a site of political struggle. Turning off notifications, following people with different politics, trying to spend a day a week offline—while these actions can be useful on an individual level, they are not a sufficient response to the ways that technology challenges us today. In a moment wherein so many of the proclamations from Silicon Valley are treated as though they are inevitable, Luddism functions as a powerful retort and as a useful reminder that the people most invested in the belief that you cannot resist capitalist technologies are the people most terrified that people might resist those technologies.

    In one of the most infamous of the surviving Luddite letters, “the General of the Army of Redressers,” Ned Ludd writes: “We will never lay down our Arms. The House of Commons passes an Act to put down all Machinery hurtful to Commonality, and repeal that to hang Frame Breakers. But We. We petition no more that won’t do fighting must.” These were militant words from a militant movement, but the idea that there is such a thing as “Machinery hurtful to Commonality” and that such machinery needs to be opposed remains clear two hundred years later.

    There is a specter haunting technological society—the specter of Luddism. And as Mueller makes clear in Breaking Things at Work that specter is becoming more corporeal by the moment.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communication department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


    _____


  • Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)


    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb


    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point: though it was a very unpopular position to take for many years, it is now a fairly safe position to say that “Facebook is a problem”; however, it still remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there still remains a common sentiment that computers are neutral. And thus such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why the book is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors of the volume are quite upfront about its confrontational stance. Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors shall do this in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14). Unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were either ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto; as the volume proceeds the contributors maintain the sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency” sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread” turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything are relying on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close with a consideration of how the digital age is “dominated by the feeling of paranoia” (120), by discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137). These lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters making up the second part of the volume. Hicks’s comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in its history, Corinna Schlombs looks to IBM to consider how that company suppressed efforts by workers to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those who speak with accents, a reflection of the lack of diversity amongst those who design these technologies. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim they are creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treat the demands of languages not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi), to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how the speed with which platforms have developed to become infrastructure has been much swifter than the speed with which older infrastructural systems were developed, which he explores by highlighting three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too the other two editors close the book with a similar rallying cry. In assessing the chapters that had come before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors and editors have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we seek the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters must force us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines, we need to be considering what kind of technological world we want to live in. And moreover we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness on the part of the contributors to write in a declarative style. In this book criticisms are not carefully couched behind three layers of praise for Silicon Valley and odes of affection for smartphones; rather, the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone results in too many wonderful turns of phrase and incendiary remarks to list all of them here, but the broad discussion around computers would be greatly enhanced by more comments like Janet Abbate’s “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263). While critics of technology often find themselves having to argue from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position, beyond the titular idea, it is the one eloquently captured by Mar Hicks thusly:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer-dependent societies, and how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is, unfortunately, a testament to decades of unheeded warnings.

    One of the objectives taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this is not the first moment in which growing distrust and dissatisfaction with technological forces has arisen, nor is it the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this vital new volume fits within the history of critiques of computing. Or, to frame this slightly differently: in what ways is the 21st century critique of computing different from the 20th century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing, including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter, in a quarantined “Critiques” section, from which to cast doubts on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and Weizenbaum’s chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to seeing only the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions had changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s, with uptake not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, in 1956, Lewis Mumford warned that:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    It is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs, for “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that this was “a new object of worship” seemed almost absurd. Though he continued issuing dire predictions about computers, by 1970, when Mumford wrote of the development of “computer dominated society,” this warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976, those warnings were still somewhat muted, as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.”  Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

    While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship”; by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society”; by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations; in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377), and the emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

    Back to the essay

    Works Cited

    • Lewis Mumford. The Transformations of Man. New York: Harper and Brothers, 1956.


  • Zachary Loeb — Does Facebook Have Politics? (Review of Langdon Winner, The Whale and the Reactor, second edition)


    a review of Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, second edition (University of Chicago Press, 2020)

    by Zachary Loeb

    ~

    The announcement that Mark Zuckerberg and Priscilla Chan would be donating $300 million to help address some of the challenges COVID-19 poses for the 2020 elections was met with a great deal of derision. The scorn was not directed at the effort to recruit poll workers, or to purchase PPE for them, but at the source from which these funds were coming. Having profited massively from allowing COVID-19 misinformation to run rampant over Facebook, and having shirked responsibility as the platform exacerbated political tensions, the company made a funding announcement that came across not only as too little too late, but as a desperate publicity stunt. The incident was but another installment in Facebook’s tumult as the company (alongside its CEO/founder) continually finds itself cast as a villain. Facebook can take some solace in knowing that other tech companies—Google, Amazon, Uber—are also receiving increasingly negative attention, and yet it seems that for every one critical story about Amazon there are five harsh pieces about Facebook.

    Where Facebook, and Zuckerberg, had once enjoyed laudatory coverage, with the platform being hailed as an ally of democracy, by 2020 it has become increasingly common to see Facebook (and Zuckerberg) treated as democracy’s gravediggers. Indeed, much of the animus found in the increasingly barbed responses to Facebook seems to be driven by a sense of betrayal. Many people, including more than a few journalists and scholars, had initially been taken in by Facebook’s promises of a more open and connected world, even if they are now loath to admit that they ever fell for that ruse. Certainly, or so the shift in sentiment conveys, Facebook and Zuckerberg deserve to be angrily upbraided and treated with withering skepticism now… but who could have seen this coming?

    “Technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6). When those words were first published, in 1986, Mark Zuckerberg was around two years old, and yet those words provide a more concise explanation of Facebook than any Facebook press release or defensive public speech given by Zuckerberg. Granted, those words were not written specifically about Facebook (how could they have been?), but in order to express a key insight about the ways in which technologies impact the societies in which they are deployed. The point being not only to consider how technologies can have political implications, but to emphasize that technologies are themselves political. Or to put it slightly differently, Langdon Winner was warning about Facebook before there was a Facebook to warn about.

    More than thirty years after its initial publication, the University of Chicago Press has released a new edition of Langdon Winner’s The Whale and the Reactor. Considering the frequency with which this book, particularly its second chapter “Do Artifacts Have Politics?,” is still cited today, it is hard to suggest that Winner’s book has been forgotten by scholars. And beyond the academy, those who have spent even a small amount of time reading some of the prominent recent STS or media studies works will likely have come across his name. Therefore, the publication of this second edition—equipped with a new preface, afterword, an additional chapter, and a spiffy red cover—represents an important opportunity to revisit Winner’s work. While its citational staying power suggests that The Whale and the Reactor has become something of an essential touchstone for works on the politics of technological systems, the larger concerns coursing through the book have not lost any of their weight in the years since it was published.

    For at its core The Whale and the Reactor is not about the types of technologies we are making, but about the type of society we are making.

    Divided into three sections, The Whale and the Reactor wastes no time in laying out its central intervention. Noting that technology had rarely been treated as a serious topic for philosophical inquiry, Winner sets about arguing that an examined life must examine the technological systems that sustain that life. That technology has so often been relegated to the background has given rise to a sort of “technological somnambulism” whereby many “willingly sleepwalk” as the world is technologically reconfigured around them (10). Moving forward in this dreamy state, the sleepers may have some vague awareness of the extent to which these technological systems are becoming interwoven into their daily lives, but by the time they awaken (supposing they ever do awaken) these systems have accumulated sufficient momentum as to make it seemingly impossible to turn them off at all. Though The Whale and the Reactor is not a treatise on somnambulism, this characterization is significant insofar as a sleepwalker is one who staggers through the world in a state of unawareness, and thus cannot be held truly responsible. Contrary to such fecklessness, the argument presented by Winner is that responsibility for the world being remade by technology is shared by all those who live in that world. Sleepwalking is not an acceptable excuse.

    In what is almost certainly the best-known section of the book, Winner considers whether or not artifacts have politics—answering this question strongly in the affirmative. Couching his commentary in a recognition that “Scarcely a new invention comes along that someone doesn’t proclaim it as the salvation of a free society” (20), Winner highlights that social and economic forces leave clear markers on technologies, but he notes that the process works in the opposite direction as well. Two primary ways in which “artifacts can contain political priorities” (22) are explored: firstly, situations wherein a certain artifact is designed in such a way as to settle a particular larger issue; and secondly, technologies that are designed to function within, and reinforce, a certain variety of political organization. As an example of the first variety, Winner points to mechanization at a nineteenth-century reaper manufacturing plant, wherein mechanization was pursued not to produce higher quality or less expensive products, but for the purpose of breaking the power of the factory’s union. An example of the second sort of politics can be seen in the case of atomic weaponry (and nuclear power), wherein the very existence of these technologies necessitates complex organizations of control and secrecy. Though Winner frames the first example as presenting the clearer proof, technologies of the latter sort make a significant impact insofar as they tend to make “moral reasons other than those of practical necessity appear increasingly obsolete” (36) for the political governance of technological systems.

    Inquiring as to the politics of a particular technology provides a means by which to ask questions about the broader society, specifically: what kind of social order gets reified by this technology? One of freedom and equality? One of control and disenfranchisement? Or one that distracts from the maintenance of the status quo by providing the majority with a share in technological abundance? It is easy to avoid answering such questions when you are sleepwalking, and as a result, “without anyone having explicitly chosen it, dependency upon highly centralized organizations has gradually become a dominant social form” (47). That this has not been “explicitly chosen” is partially a result of the dominance of a technologically optimistic viewpoint that has held to “a conviction that all technology—whatever its size, shape, or complexion—is inherently liberating” (50). Though this bright-eyed outlook is periodically challenged by an awareness of the ways that some technologies can create or exacerbate hazards, these dangers wind up being treated largely as hurdles that will be overcome by further technological progress. When all technologies are seen as “inherently liberating” a situation arises wherein “liberation” comes to be seen only in terms of what can be technologically delivered. Thus, the challenge is to ask “What forms of technology are compatible with the kind of society we want to build?” (52) rather than simply assume that we will be content in whatever world we sleepily wander into. Rather than trust that technology will be “inherently liberating,” Winner emphasizes that it is necessary to ask what kinds of technology will be “compatible with freedom, social justice, and other key political ends” (55), and to pursue those technologies.

    Importantly, a variety of people and groups have been aware of the need to push for artifacts that more closely align with their political ideals, though these responses have taken on a range of forms. Instead of seeing technology as deeply intertwined with political matters, some groups saw technology as a way of getting around political issues: why waste time organizing for political change when microcomputers and geodesic domes can allow you to build that alternative world here and now? In contrast to this consumeristic, individualistically oriented attitude (exemplified by works such as the Whole Earth Catalog), there were also efforts to ask broader political questions about the nature of technological systems, such as the “appropriate technology” movement (which grew up around E.F. Schumacher’s Small is Beautiful). Yet such attempts already appear consigned to the past, rearguard actions that meekly tried to resist the increasing dominance of complex technical systems. As the long seventies shifted into the 1980s and increasing technological centralization became evident, such movements came to appear as romantic gestures towards the dream of decentralization. And though the longing for escape from centralized control persists, the direction “technological ‘progress’ has followed” is one in which “people find themselves dependent upon a great many large, complex systems whose centers are, for all practical purposes, beyond their power to influence” (94).

    Perhaps no technology simultaneously demonstrates the tension between the dream of decentralization and the growth of control quite like the computer. Written in the midst of what was being hailed as “the computer revolution” or the “information revolution” (98), The Whale and the Reactor bore witness to the exuberance with which the computer was greeted even as this revolution remained “conspicuously silent about its own ends” (102). Though it was not entirely clear what problem the computer was the solution to, there was still a clear sentiment that the computer had to be the solution to most problems. “Mythinformation” is the term Winner deploys to capture this “almost religious conviction that a widespread adoption of computers and communications systems along with easy access to electronic information will automatically produce a better world for human living” (105). Yet “mythinformation” performs technological politics in inverse order: instead of deciding on political goals and then seeking out the right technological forms for achieving those goals, it takes a technology (the computer) and then seeks to rearrange political problems in such a way as to make them appear as though they can be addressed by that technology. Thus, “computer romantics” hold to the view that “increasing access to information enhances democracy and equalizes social power” (108), less as a reflection of the way that political power works and more as a response to the fact that “increasing access to information” is one of the things that computers do well. Despite the equalizing hopes, earnest though they may have been, that were popular amongst the “computer romantics,” the trends that were visible early in “the computer revolution” gave ample reason to believe that the main result would be “an increase in power by those who already had a great deal of power” (107). Indeed, contrary to the liberatory hopes that were pinned on “the computer revolution,” the end result might be one wherein “confronted with omnipresent, all-seeing data banks, the populace may find passivity and compliance the safest route, avoiding activities that once represented political liberty” (115).

    Considering the overwhelming social forces working in favor of unimpeded technological progress, there are nevertheless a few factors that have been legitimated as reasons for arguing for limits. While there is a long trajectory of theorists and thinkers who have mulled over the matter of ecological despoilment, and while environmental degradation is a serious concern, “the state of nature” represents a fraught way to consider technological matters. For some, the environment has become little more than standing reserve to be exploited, while others have formed an almost mystical attachment to an imagination of pristine nature; in this context “ideas about things natural must be examined and criticized” as well (137). Related to environmental matters are concerns that take as their catchword “risk,” and which attempt to reframe the discussion away from hopes and towards potential dangers. Yet, in addition to cultural norms that praise certain kinds of “risk-taking,” a focus on risk assessment tends to frame situations in terms of tradeoffs wherein one must balance dangers against potential benefits—with the result being that the recontextualized benefit is generally perceived as being worth it. And if the environment and risk are unsatisfactory grounds for arguing for limits, so too is the very notion of “human values,” which “acts like a lawn mower that cuts flat whole fields of meaning and leaves them characterless” (158).

    In what had originally been The Whale and the Reactor’s last chapter, Winner brought himself fully into the discussion—recalling how it was that he came to be fascinated with these issues, and commenting on the unsettling juxtaposition he felt while seeing a whale swimming not far from the nuclear reactor at Diablo Canyon. It is a chapter that critiques the attitude towards technology that Winner saw in many of his fellow citizens: one of people having “gotten used to having the benefits of technological conveniences without expecting to pay the costs” (171). This sentiment is still fully on display more than thirty years later, as Winner shifts his commentary (in a new chapter for this second edition) to the age of Facebook and the Trump presidency. Treating the techno-utopian promises that had surrounded the early Internet as another instance of technology being seen as “inherently liberating,” Winner does not seem particularly surprised by the way that the Internet and social media are revealing that they “could become a seedbed for concentrated, ultimately authoritarian power” (189). In response to the “abuses of online power,” and beneath all of the glitz and liberating terminology that is affixed to the Internet, “it is still the concerns of consumerism and techno-narcissism that are emphasized above all” (195). Though the Internet had been hailed as a breakthrough, it has wound up leading primarily to breakdown.

    Near the book’s outset, Winner observes how “In debates about technology, society, and the environment, an extremely narrow range of concepts typically defines the realm of acceptable discussion” (xii), and it is those concepts that he wrestles with over the course of The Whale and the Reactor. And the point that Winner returns to throughout the volume is that technological choices—whether they are the result of active choice or a result of our “technological somnambulism”—are not just about technology. Rather, “What appear to be merely instrumental choices are better seen as choices about the form of social and political life a society builds, choices about the kinds of people we want to become” (52).

    Or, to put it a slightly different way, if we are going to talk about the type of technology we want, we first need to talk about the type of society we want, whether the year is 1986 or 2020.

    *

    Langdon Winner began his foreword to the 2010 edition of Lewis Mumford’s Technics and Civilization with the comment that “Anyone who studies the human dimensions of technological change must eventually come to terms with Lewis Mumford.” And it may be fair to note, in a similar vein, that anyone who studies the political dimensions of technological change must eventually come to terms with Langdon Winner. The staying power of The Whale and the Reactor is something which Winner acknowledges with a note of slightly self-deprecating humor, in the foreword to the book’s second edition, where he comments “At times, it seems my once bizarre heresy has finally become a weary truism” (vii).

    Indeed, to claim in 2020 that artifacts have politics is not to make a particularly radical statement. That statement has been affirmed enough times as to hardly make it a question that needs to be relitigated. Yet the second edition of The Whale and the Reactor is not a victory lap wherein Winner crows that he was right, nor is it the ashen lamentation of a Cassandra glumly observing that what they feared has transpired. Insofar as The Whale and the Reactor deserves this second edition, and to be clear it absolutely deserves this second edition, it is because the central concerns animating the book remain just as vital today.

    While the second edition contains a smattering of new material, the vast majority of the book remains as it originally was. As a result, the book undergoes that strange kind of alchemy whereby a secondary source slowly transforms into a primary source—insofar as The Whale and the Reactor can now be treated as a document showing how at least some scholars were making sense of “the computer revolution” while in the midst of it. The book’s first third, which contains the “Do Artifacts Have Politics?” chapter, has certainly aged the best, and the expansiveness with which Winner addresses the question of politics and technology makes it clear why those early chapters remain so widely read, while ensuring that these chapters have a certain timeless quality to them. However, as the book shifts into its exploration of “Technology: Reform and Revolution,” it does reveal its age. Read today, the commentary on “appropriate technology” comes across more as a historical curio than as an exploration of the shortcomings of a recently failed experiment. It feels somewhat odd to read Winner’s comments on “the state of nature,” bereft as they are of any real mention of climate change. And though Winner could have written in 1986 that technology was frequently overlooked as a topic deserving of philosophical scrutiny, today there are many works responding to that earlier lack (and many of those works even cite Winner). While Winner certainly cannot be faulted for not seeing the future, what makes some of these chapters feel particularly dated is that in many other places Winner excelled so remarkably at seeing the future.

    The chapter on “Mythinformation” stands as an excellent critical snapshot of the mid-80s enthusiasm that surrounded “the computer revolution,” with Winner skillfully noting how the utopian hopes surrounding computers were just the latest instance of the well-worn pattern wherein every new technology is seen as “inherently liberating.” In writing on computers, Winner does important work in separating the basics of what these machines literally can do from the sorts of far-flung hopes that their advocates attached to them. After questioning whether the issues facing society are genuinely ones that boil down to access to information, Winner noted that it was more than likely that the real impact of computers would be to help those in control stay in control. As he puts it, “if there is to be a computer revolution, the best guess is that it will have a distinctively conservative character” (107). In 1986, it may have been necessary to speak of this in terms of a “best guess,” and such comments may have met with angry responses from a host of directions, but in 2020 it seems fairly clear that Winner’s sense of what the impact of computers would be was not wrong.

    Considering the directions that widespread computerization would push societies, Winner hypothesized that it could lead to a breakdown in certain kinds of in-person contact and make it so that people would “become even more susceptible to the influence of employers, news media, advertisers, and national political leaders” (116). And moving to the present, in the second edition’s new chapter, Winner observes that despite the shiny toys of the Internet the result has been one wherein people “yield unthinkingly to various kinds of encoded manipulation (especially political manipulation), varieties of misinformation, computational propaganda, and political malware” (187). It is not that The Whale and the Reactor comes out to openly declare “don’t tell me that you weren’t warned,” but there is something about the second edition being published now that feels like a pointed reminder. As former techno-optimists rebrand as techno-skeptics, the second edition is a reminder that some people knew to be wary from the beginning. Some may anxiously bristle as the CEOs of tech giants testify before Congress, some may feel a deep sense of disappointment every time they see yet another story about Facebook’s malfeasance, but The Whale and the Reactor is a reminder that these problems could have been anticipated. If we are unwilling to truly confront the politics of technologies when those technologies are new, we may find ourselves struggling to deal with the political impacts of those technologies once they have wreaked havoc.

    Beyond the book’s classic posing of the important “do artifacts have politics?” question, the present collision between technology and politics draws attention to a deeper matter running through The Whale and the Reactor. Namely, that the book keeps coming back to the idea of democracy. Indeed, The Whale and the Reactor shows a refreshingly stubborn commitment to this idea. Technology clearly matters in the book, and technologies are taken very seriously throughout, but Winner keeps returning to democracy. In commenting on the ways in which artifacts have politics, the examples that Winner explores are largely ones wherein technological systems are put in place that entrench the political authority of a powerful minority, or which require the development of regimes that exceed democratic control. For Winner, democracy (and being a participant in a democracy) is an active process, one that cannot be replaced by “passive monitoring of electronic news and information” which “allows citizens to feel involved while dampening the desire to take an active part” (111). Insofar as “the vitality of democratic politics depends upon people’s willingness to act together in pursuit of their common ends” (111), a host of technological systems have been put in place that seem to have simultaneously sapped “people’s willingness” while also breaking down a sense of “common ends.” And though the Internet may trigger some nostalgic memory of active democracy, it is only a “pseudopublic realm” wherein the absence of the real conditions of democracy “helps generate wave after wave of toxic discourse along with distressing patterns of oligarchical rule, incipient authoritarianism, and governance by phonies and confidence men” (192).

    Those who remain committed to arguing for the liberatory potential of computers and the Internet, a group which includes individuals from a range of perspectives, might justifiably push back against Winner by critiquing the vision of democracy he celebrates. After all, there is something rather romantic about Winner’s evocations of New England town hall meetings and his comments on the virtues of face-to-face encounters. Do all participants in such encounters truly get to participate equally? Are such situations even set up so that all people can participate equally? What sorts of people and what modes of participation are privileged by such a model of democracy? Is a New England town hall meeting really a model for twenty-first century democracy? Here it is easy to picture Winner responding that what such questions reveal is the need to create technologies that will address those problems—and where a split may then open up is around the question of whether or not computers and the Internet represent such tools. That “technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6) opens up a space in which different technologies can be built, even as other technologies can be dismantled, but such a recognition forces us to look critically at our technologies and truly confront the type of world that we are making and reinforcing for each other. And, in terms of computers and the Internet, the question that The Whale and the Reactor forces to the fore is one of: which are we putting first, computers or democracy?

    Winner warned his readers of the dangers of “technological somnambulism,” but it unfortunately seems that his call was not sufficient to wake up the sleepers in his midst in the 1980s. Alas, that The Whale and the Reactor remains so strikingly relevant is partially a testament to the sleepwalkers’ continual slouch into the future. And though there may be some hopeful signs of late that more and more people are groggily stirring and rubbing the slumber from their eyes—the resistance to facial recognition is certainly a hopeful sign—a danger persists that many will conclude that, since they have reached this spot, they must figure out some way to justify being here. After all, few want to admit that they have been sleepwalking. What makes The Whale and the Reactor worth revisiting today is not only that Winner asks the question “do artifacts have politics?” but the way in which, in responding to this question, he is willing to note that there are some artifacts that have bad politics. That there are some artifacts that do not align with our political goals and values. And what’s more, that when we are confronted with such artifacts, we do not need to pretend that they are our friends just because they have rearranged our society in such a way that we have no choice but to use them.

    In the foreword to the first edition of The Whale and the Reactor, Winner noted “In an age in which the inexhaustible power of scientific technology makes all things possible, it remains to be seen where we will draw the line, where we will be able to say, here are the possibilities that wisdom suggests we avoid” (xiii). For better, or quite likely for worse, that still remains to be seen today.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

  • Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    a review of Matt Tierney, Dismantlings: Words Against Machines in the American Long Seventies (Cornell University Press, 2019)

    by Zachary Loeb

    ~

    The guy said, “If machinery
    makes you so happy
    go buy yourself
    a Happiness Machine.”
    Then he realized:
    They were trying to do
    exactly that.

    – Kenneth Burke, “Routine for a Stand-Up Comedian” (15)

    A sledgehammer is a fairly versatile tool. You can use it to destroy things, you can use it to build things, and in some cases you can use it to destroy things so that you can build things. Granted, it remains a rather heavy and fairly blunt tool; it is not particularly well suited for fine detail work requiring a high degree of precision. Which is, likely, one of the reasons why those who are famed for wielding sledgehammers often wind up being characterized as just as blunt and unsubtle as the heavy instruments they swung.

    And, perhaps, no group has been more closely associated with sledgehammers than the Luddites: those early nineteenth-century skilled craft workers who took up arms to defend their communities and their livelihoods from the “obnoxious machines” being introduced by their employers. Though the tactic of machine breaking as a form of protest has a lengthy history that predates (and post-dates) the Luddites, it is a tactic that has come to be bound up with the name of the followers of the mysterious General Ludd. Despite the efforts of writers and thinkers to rescue the Luddites’ legacy from “the enormous condescension of posterity” (Thompson, 12), the term “Luddite” today generally has less to do with a specific historical group and has instead largely become an epithet to be hurled at anyone who dares question the gospel of technological progress. Yet, as the second decade of the twenty-first century comes to a close, it may well be that “Luddite” has lost some of its insulting sting against the backdrop of metastasizing tech giants, growing mountains of toxic e-waste, and an ecological crisis that owes much to an unquestioned faith in the benefits of technology.

    General Ludd may well get the last laugh.

    That the Luddites have lingered so fiercely in the public imagination is a testament to the fact that the Luddites, and the actions for which they are remembered, are good to think with. Insofar as one can talk about Luddism, it represents less a coherent body of thought created by the Luddites themselves and more the attempt by later scholars, critics, artists, and activists to make sense of what is usable from the Luddite legacy. And it is this effort to think through and think with that Matt Tierney explores in his phenomenal book Dismantlings: Words Against Machines in the American Long Seventies. While the focus of Dismantlings, as its title makes clear, is on the “long seventies” (the years from 1965 to 1980), the book represents an important intervention in current discussions and debates around the impacts of technology on society. Just as the various figures Tierney discusses turned their thinking (to varying extents) back to the Luddites, so too, the book argues, is it worth revisiting the thinking and writing on the matter from the long seventies. This is not a book on the historical Luddites; instead, it is a vital contribution to attempts to theorize what Luddism might mean, and how we are to confront the various technological challenges facing us today.

    Largely remembered for the Vietnam War, the Civil Rights movement, the space race, and a general tone of social upheaval, the long seventies also represented a period when technological questions were gaining prominence. Thinkers such as Marshall McLuhan, Buckminster Fuller, Norbert Wiener, and Stewart Brand put forth visions of the way that the new consumer technologies would remake society: creating “global villages” or giving rise to a perception of all of humanity as passengers on “spaceship earth.” Yet they were hardly the only figures contemplating technology in that period, and many of the other visions that emerged aimed to directly challenge some of the assumptions and optimism of the likes of McLuhan and Fuller. In the long seventies, the question of what would come next was closely entwined with an evaluation of what had come before; indeed, “the breaking of retrogressive notions of technology coupled with the breaking of retrogressive technologies…undergoes a period of vital activity during the Long Seventies in the poems, fictions, and activist speech of what was then called cyberculture” (15). Granted, this was a “breaking” that generally had more to do with theorizing than with actual machine smashing. It could more accurately be seen as “dismantling,” the careful taking apart so that the functioning can be more fully understood and evaluated. Yet it is a thinking that, importantly, occurred against a recognition that the world was, as Norbert Wiener observed, “the world of Belsen and Hiroshima” (8). To make sense of the resistant narratives towards technology in the long seventies it is necessary to engage critically with the terminology of the period, and thus Tierney’s book represents a sort of conceptual “counterlexicon” with which to do just that.

    As anyone who knows about the historical Luddites can attest, they did not hate technology (as such). Rather, they were opposed to particular machines being used in a particular way at a particular place and time. And it is a similar attitude towards Luddism (not as an opposition to all technology, but as an understanding that technology has social implications) that Tierney discusses in the long seventies. Luddism here comes to represent “a gradual relinquishing of machines whose continued use would contravene ethical principles” (30), and this attitude is found in Langdon Winner’s concept of “epistemological Luddism” (as discussed in his book Autonomous Technology) and in the poetry of Audre Lorde. While Lorde’s line “for the master’s tools will never dismantle the master’s house” continues to be well known by activists, the question of “tools” can also be engaged with quite literally. Approached with a mind towards Luddism, Lorde’s remarks can be seen as indicating that it is not only “the master’s house” that must be dismantled but “the master’s tools” as well – and Lorde’s writing suggests poetry as a key tool for the dismantler. The version of Luddism that emerges in the late seventies represents a “sort of relinquishing”; it “is not about machine-smashing at all” (47), instead entailing the careful work of examining machines to determine which are worth keeping.

    The attitudes towards technology of the long seventies were closely entwined with a sense of the world as made seemingly smaller and more connected thanks to the new technologies of the era. A certain strand of thinking in this period, exemplified by McLuhan’s “global village” or Fuller’s “Spaceship Earth,” achieved great popular success even as reactionary racist and nativist notions lurked just below the surface of those concepts’ seeming technological optimism. Contrary to the “fatalistic acceptance of new technological constraints on life” (48), works by science fiction authors like Ursula Le Guin and Samuel R. Delany presented a notion of “communion, as a collaborative process of making do” (51). Works like The Dispossessed (Le Guin) and Triton (Delany) presented readers with visions, and questions, of “real coexistence…not the passage but the sharing of a moment” (63). In contrast to the “technological Messianism” (74) of the likes of Fuller and McLuhan, the “communion”-based works by the likes of Le Guin and Delany focused less on exuberance for the machines themselves and instead sought to critically engage with what types of coexistence such machines would and could genuinely facilitate.

    Coined by Alice Mary Hilton in 1963, the idea of “cyberculture” did not originally connote the sort of blissed-out-techno-optimism that the term evokes today. Rather, it was meant to be “an alternative to the global village and the one-town world, and an insistence on collective action in a world not only of Belsen and Hiroshima but also of ongoing struggles toward decolonization, sexual and gender autonomy, and racial justice” (12). Thus, “cyberculture” (and cybernetics more generally) may represent one of the alternative pathways along which technological society could have developed. What “cyberculture” represented was not an exuberant embrace of all things “cyber,” but an attempt to name and thereby open a space for protest, not “against thinking machines” but one that would “interrupt the advancing consensus that such machines had shrunk the globe” (81). These concepts achieved further maturation in the Ad Hoc Committee’s “Triple Revolution Manifesto” (from 1964), which sought to link an emancipatory political program to advances in new technology, linking “cybernation to a decrease in capitalist, racist, and militarist violence” (85). Seizing upon an earnest belief that technological ethics could guide new technological developments towards just ends, “cyberculture” also imagined that such tools could supplant scarcity with abundance.

    What “cyberculture” based thinking consists of is a sort of theoretical imagining, which is why a document like a manifesto represents such an excellent example of “cyberculture” in practice. It is a sort of “distortion” that recognizes how “the fates of militarism, racism, and cybernation have only ever been knotted together” and “thus calls for imaginative practices, whether literary or activist, for cutting through the knot” (95). This is the sort of theorizing that can be seen in Martin Luther King, Jr.’s commentary on how science and technology had made of “this world a neighborhood” without yet making “of it a brotherhood” (96). The technological ethics of the advocates of “cyberculture” could be the tools with which to make “it a brotherhood” without discarding all of the tools that had made it first “a neighborhood.” The risks and opportunities of new technological forms were also commented upon in works like Shulamith Firestone’s Dialectic of Sex wherein she argued that women needed to seize and guide these technologies. Blending analysis of what is with a program for what could be, Firestone’s work shows “that if other technologies are possible, then other social practices, even practices that are rarely considered in relation to new technology, may be possible too” (105).

    For some, in the long seventies, challenging machinery still took on a destructive form, though this often entailed a sort of “revolutionary suicide” which represented an attempt to “prevent the becoming-machine of subjugated human bodies and selves” (113): a refusal to become a machine oneself, and a refusal to allow oneself to become fodder for the machine. Such a self-destructive act flows from the Pynchon-esque tragic recognition of a growing consensus “that nothing can be done to oppose” the new machines (122). Such woebegone dejection is in contrast to other attitudes that sought not only to imagine but also to construct new tools that would put people and community first. John Mohawk, of the Haudenosaunee Confederacy of Mohawk, Oneida, Onondaga, Cayuga, and Seneca peoples, gave voice to this in his theorizing of “liberation technology.” As Mohawk explained at a UN session, “Decentralized technologies that meet the needs of the people those technologies serve will necessarily give life to a different kind of political structure, and it is safe to predict that the political structure that results will be anticolonial in nature” (127). The search for such alternative technologies suggested a framework in which what was needed was “machines to suit the community, or else no machines at all” (129) – a position that countered the technological abundance hoped for by “cyberculture” with an appeal for technologies of subsistence. After all, this was the world of Belsen and Hiroshima, “a world of new and barely understood technologies” (149); in such a world, “where the very skin of the planet is a ledger of technological misapplications” (154), it is wise to proceed with caution and humility.

    The long seventies present a fascinating kaleidoscope of visions of technologies, how to live with them, how to select them, and how to think about them. What makes the long seventies so worthy of revisiting is that they and the present moment are both “seized with a critical discourse about technology, and by a popular social upheaval in which new social movements emerge, grow, and proliferate” (5). Luddism may be routinely held up as a foolish reaction, but “by breaking apart certain machines, we can learn to use them better, or never use them again. By dissecting certain technocentric cultural logics, we can likewise challenge or reject them” (162). That the Luddites are so constantly vilified may ultimately be a signal of their dangerous power, insofar as they show that people need not passively sit and accept everything that is sold to them as technological progress. Dismantling represents a politics “not as machine hating, but as a way to protect life against a large-scale regimentation and policing of security, labor, time, and community” (166).

    To engage in the fraught work of technological critique is to open oneself up to being labeled a Luddite (with the term being hurled as an epithet), to accusations of complicity in the very systems you are critiquing, and to a realization that many people simply don’t want to listen to their smartphone habits being criticized. Yet the various conceptual frameworks that can be derived from a consideration of “words against machines in the American long seventies” provide “tactics that might be repeated or emulated, if nostalgia and cynicism do not bar the way” (172). Such concepts present a method of pushing back at the “yes, but” logic which riddles so many discussions of technology today – conversations in which the downsides are acknowledged (the “yes”), yet where the counter is always offered that perhaps there’s still a way to use those technologies correctly (the “but”).

    In contrast to the comfortable rut of “yes, but” Tierney’s book argues for dismantling, wherein “to dismantle is to set aside the dithering of yes, but and to try instead the hard work of critique” (175).

    Running through many of the thinkers, writers, and activists detailed in Dismantlings is a genuine attempt to come to terms with the ways in which new technological forces are changing society. Though many of these individuals responded to such changes not by picking up hammers, but by turning to writing, this activity was always couched in a sense that the shifts afoot truly mattered. Agitated by the roaring clangor of the machines of their day, these figures from the long seventies were looking at the machines of their moment in order to consider what would need to be done to construct a different future. And they did this while looking askance at the more popular techno-utopian visions of the future being promulgated in their day. Writing of the historic Luddites, the historian David Noble commented that “the Luddites were perhaps the last people in the West to perceive technology in the present tense and to act upon that perception” (Noble, 7), and it may be tempting to suggest that the various figures cataloged in Dismantlings were too focused on the future to have acted upon technology in their present. Nevertheless, as Tierney notes, “the present does not precede the future; rather the future (like its past) distorts and neighbors the present” (173) – the Luddites may have acted in the present, but their eyes were also on the future. It is worth remembering that we do not make sense of the technologies around us solely by what they mean now, but by what we think they will mean for the future.

    While Dismantlings provides a “counterlexicon” drawn from the writing/thinking/acting of a range of individuals in the long seventies, there is something rather tragic about reading these thoughts two decades into the twenty-first century. After all, readers of Dismantlings find themselves in what would have been the future to these long seventies thinkers. And, to be blunt, the world of today seems more in line with those thinkers’ fears for the future than with their hopes. An “epistemological Luddism” has not been used to carefully evaluate which tools to keep and which to discard, “communion” has not become a guiding principle, and “cyberculture” has drifted away from Hilton’s initial meaning to become a stand-in for a sort of uncritical techno-utopianism. The “master’s tools” have expanded to encompass ever more powerful tools, and the “master’s house” appears sturdier than ever – worse still, many of us may have become so enamored by some of “the master’s tools” that we have started to entertain delusions that these are actually our tools. To a certain extent, Dismantlings stands as a reminder of a range of individuals who tried to warn us that we would wind up in the mess in which we find ourselves. Those who are equipped with such powers of perception are often mocked and derided in their own time, but looking back at them with hindsight one can get a discomforting sense of just how prescient they truly were.

    Matt Tierney’s Dismantlings: Words Against Machines in the American Long Seventies is a remarkable book. It is also a difficult book. Difficult not because of impenetrable theoretical prose (the writing is clear and crisp), but because it is always challenging to go back and confront the warnings that were ignored. At a moment when headlines are filled with sordid tales of the malfeasance of the tech behemoths, and increasingly terrifying news of the state of the planet, it is both reassuring and infuriating to recognize that it did not have to be this way. True, these long seventies figures did not specifically warn about Facebook, and climate change was not the term they used to speak of environmental degradation – but it’s doubtful that many of these figures would be particularly surprised by either occurrence.

    As a contribution to scholarship, Dismantlings represents a much-needed addition to the literature on the long seventies – particularly the literature that considers technology in that period. While much of the present literature (much of it excellent) dealing with those years has tended to focus on the hippies who fell in love with their computers, Tierney’s book is a reminder of those who never composed poems of praise for their machines. After all, not everyone believed that the computer would be an emancipatory technology. This book brings together a wide assortment of figures and draws useful connections between them that will hopefully rescue many a name from obscurity. And even those names that can hardly be called obscure appear in a new light when viewed through the lenses that Tierney develops in this book. While readers may be familiar with names like Lorde, Le Guin, Delany, and Pynchon, Tierney makes it clear that there is much to be gained by reading Hilton, Mohawk, and Firestone, and by revisiting the “Triple Revolution Manifesto.”

    Tierney also offers a vital intervention into ongoing discussions over the meaning of Luddism. While it may be fair to say that such discussions are occurring amongst a rather small group of people, it is a passionate debate nevertheless. Tierney avoids re-litigating the history of the original Luddites, and his timeline cuts off before the emergence of the Neo-Luddites, but his book provides valuable insight into the transformations the idea of Luddism went through in the long seventies. Granted, Luddism does not always appear to be a term that was being embraced by the figures in Tierney’s history. Certainly, Winner developed the concept of “epistemological Luddism,” and Pynchon is still remembered for his “Is it O.K. to Be a Luddite?” op-ed, but many of those who spoke about dismantling did not don the mask, or pick up the hammer, of General Ludd. Thus, this book is a clear attempt not to restate others’ views on Luddism, but to freshly theorize the idea. Drawing on his long seventies sources, Tierney writes that:

    Luddism is not the destruction of all machines. And neither is it the hatred of machines as such. Like cyberculture, it is another word for dismantling. Luddism is the performative breaking of machines that limit species expression and impede planetary survival. (13)

    This is a robust and loaded definition of Luddism. While it clearly moves Luddism towards a practice instead of simply a descriptor for particular historical actors, it also presents Luddism as a constructive (as opposed to destructive) process. There are several aspects of Tierney’s definition that deserve particular attention. First, by also evoking “cyberculture” (referring to Hilton’s ethically grounded notion when she coined the term), Tierney demonstrates that Luddism is not the only word or tactic for dismantling. Second, by evoking “the performative breaking,” Tierney moves Luddism away from the blunt force of hammers and towards the more difficult work of critical evaluation. Lastly, by linking Luddism to “species expression and…planetary survival,” Tierney highlights that even if this Luddism is not “the hatred of machines as such” it still entails the recognition that there are some machines that should be hated – and that should be taken apart. It’s the sort of message that you can imagine many people getting behind, even as one can anticipate the choruses of “yes, but” that would be sure to greet this.

    Granted, even though Tierney considers a fair number of manifestos of a revolutionary sort, Dismantlings is not a new Luddite manifesto (though it might be a Luddite lexicon). While Tierney writes of the various figures he analyzes with empathy and affection, he also writes with a certain weariness. After all, as was noted earlier, we are currently living in the world about which these critics tried to warn us. And therefore Tierney can note, “if no political overturning followed the literary politics of cyberculture and Luddism in their own moment, then certainly none will follow them now” (25). Nevertheless, Tierney couches these dour comments in the observation that, “even as a revolution fails, its failure fuels common feeling without which subsequent revolutions cannot succeed” (25). At the very least the assorted thinkers and works described in Dismantlings provide a rich resource to those in the present who are concerned about “species expression” and “planetary survival.” Indeed, those advocating to break up the tech companies or pushing for the Green New Deal can learn a great deal by revisiting the works discussed in Dismantlings.

    Nevertheless, it feels as though there are some key characters missing from Dismantlings. To be clear, this point is not meant to detract from Tierney’s excellent and worthwhile book. Furthermore, it must be noted that devotees of particular theorists and social critics tend to have a strong “why isn’t [the theorist/social critic I am devoted to] discussed more in here!?” reaction to works. Still, certain figures seem oddly missing from Dismantlings. Reflecting on the types of machines against which figures in the long seventies were reacting, Tierney writes that “the war machine, the industrial machine, the computer, and the machines of state are all connected” (4). And it was the dangerous connection of all of these that the social critic Lewis Mumford sought to describe in his theorizing of “the megamachine” – theorizing which he largely did in his two-volume Myth of the Machine (which was published in the long seventies). Though Mumford’s idea of “technic” eras is briefly mentioned early in Dismantlings, his broader thinking, which touches directly on the core areas of Dismantlings, is not remarked on. Several figures who were heavily influenced by Mumford’s work appear in Dismantlings (notably Bookchin and Roszak), and Mumford’s thought could certainly have bolstered some of the book’s arguments. Mumford, after all, saw himself as a bit of an anti-McLuhan – and in evaluating thinkers who were concerned with what technology meant for “species expression” and “planetary survival” Mumford deserves more attention. Given the overall thrust of Dismantlings it also might have been interesting to see Erich Fromm’s The Revolution of Hope: Toward a Humanized Technology and Ivan Illich’s Tools for Conviviality discussed. Granted, these comments are not meant as attacks on Tierney’s excellent book – they are simply an observation by an avowed Mumford partisan.

    To fully appreciate why the thoughts from the long seventies still matter today it may be useful to consider a line from one of Mumford’s early works. As Mumford wrote, in 1931, “every generation revolts against its fathers and makes friends with its grandfathers” (Mumford, 1). To a certain extent, Dismantlings is an argument for those currently invested in debates around technology to revisit “and make friends” with earlier generations of critics. There is much to be gained from such a move. Notable here is a shift in an evaluation of dangers. Throughout Dismantlings Tierney returns frequently to Wiener’s line that “this is the world of Belsen and Hiroshima” – and without meaning to be crass this is an understanding of the world that has somewhat receded into the past as the memory of those events becomes enshrined in history books. Yet for the likes of Wiener and many of the other individuals discussed in Dismantlings, “Belsen and Hiroshima” were not abstractions or distant memories – they were not the crimes that could be consigned to the past. Rather they were bleak reminders of the depths to which humanity could sink, and the way in which science and technology could act as a weight to drag humanity even deeper. Today’s world is the world of climate change, border walls, and surveillance capitalism – but it is still “the world of Belsen and Hiroshima.”

    There is much that needs to be dismantled, and not much time in which to do that work.

    The lessons from the long seventies are those that we are still struggling to reckon with today, including the recognition that in order to fully make sense of the machines around us it may be necessary to dismantle many of them. Of course, “not everything should be dismantled, but many things should be and some things must be, even if we don’t know where to begin” (163).

    Tierney’s book does not provide an easy answer, but it does show where we should begin.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Lewis Mumford. The Brown Decades. New York: Dover Books, 1971.
    • David F. Noble. Progress Without People. Toronto: Between the Lines, 1995.
    • E.P. Thompson. The Making of the English Working Class. New York: Vintage Books, 1966.
  • Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)

    Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)

    a review of Jeff Orlowski, dir., The Social Dilemma (Netflix/Exposure Labs/Argent Pictures, 2020)

    by Zachary Loeb

    ~

    The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!

    – Joseph Weizenbaum (1976)

    Why did you last look at your smartphone? Did you need to check the time? Was picking it up a conscious decision driven by the need to do something very particular, or were you just bored? Did you turn to your phone because its buzzing and ringing prompted you to pay attention to it? Regardless of the particular reasons, do you sometimes find yourself thinking that you are staring at your phone (or other computerized screens) more often than you truly want? And do you ever feel, even if you dare not speak this suspicion aloud, that your gadgets are manipulating you?

    The good news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. The bad news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. What’s more, on the bad news front, these devices (and the platforms they run) are constantly sucking up information on you and are now pushing and prodding you down particular paths. Furthermore, alas more bad news, these gadgets and platforms are not only wreaking havoc on your attention span they are also undermining the stability of your society. Nevertheless, even though there is ample cause to worry, the new film The Social Dilemma ultimately has good news for you: a collection of former tech-insiders is starting to speak out! Sure, many of these individuals are the exact people responsible for building the platforms that are currently causing so much havoc—but they meant well, they’re very sorry, and (did you hear?) they meant well.

    Directed by Jeff Orlowski, and released to Netflix in early September 2020, The Social Dilemma is a docudrama that claims to provide an unsparing portrait of what social media platforms have wrought. While the film is made up of a hodgepodge of elements, at the core of the work are a series of interviews with Silicon Valley alumni who are concerned with the direction in which their former companies are pushing the world. Most notable amongst these, the film’s central character to the extent it has one, is Tristan Harris (formerly a design ethicist at Google, and one of the cofounders of The Center for Humane Technology) who is not only repeatedly interviewed but is also shown testifying before the Senate and delivering a TED style address to a room filled with tech luminaries. This cast of remorseful insiders is bolstered by a smattering of academics, and non-profit leaders, who provide some additional context and theoretical heft to the insiders’ recollections. And beyond these interviews the film incorporates a fictional quasi-narrative element depicting the members of a family (particularly its three teenage children) as they navigate their Internet-addled world—with this narrative providing the film an opportunity to strikingly dramatize how social media “works.”

    The Social Dilemma makes some important points about the way that social media works, and the insiders interviewed in the film bring a noteworthy perspective. Yet beyond the sad eyes, disturbing animations, and ominous music The Social Dilemma is a piece of manipulative filmmaking on par with the social media platforms it critiques. While presenting itself as a clear-eyed expose of Silicon Valley, the film is ultimately a redemption tour for a gaggle of supposedly reformed techies wrapped in an account that is so desperate to appeal to “both sides” that it is unwilling to speak hard truths.

    The film warns that the social media companies are not your friends, and that is certainly true, but The Social Dilemma is not your friend either.

    The Social Dilemma

    As the film begins the insiders introduce themselves, naming the companies where they had worked, and identifying some of the particular elements (such as the “like” button) with which they were involved. Their introductions are peppered with expressions of concern intermingled with earnest comments about how “Nobody, I deeply believe, ever intended any of these consequences,” and that “There’s no one bad guy.” As the film transitions to Tristan Harris rehearsing for the talk that will feature later in the film, he comments that “there’s a problem happening in the tech industry, and it doesn’t have a name.” After recounting his personal awakening, whilst working at Google, and his attempt to spark a serious debate about these issues with his coworkers, the film finds “a name” for the “problem” Harris had alluded to: “surveillance capitalism.” The thinker who coined that term, Shoshana Zuboff, appears to discuss this concept which captures the way in which Silicon Valley thrives not off of users’ labor but off of every detail that can be sucked up about those users and then sold off to advertisers.

    After being named, “surveillance capitalism” hovers in the explanatory background as the film considers how social media companies constantly pursue three goals: engagement (to keep you coming back), growth (to get you to bring in more users), and advertising (to get better at putting the right ad in front of your eyes, which is how the platforms make money). The algorithms behind these platforms are constantly being tweaked through A/B testing, with every small improvement being focused around keeping users more engaged. Numerous problems emerge: designed to be addictive, these platforms and devices claw at users’ attention; teenagers (especially young ones) struggle as their sense of self-worth becomes tied to “likes;” misinformation spreads rapidly in an information ecosystem wherein the incendiary gets more attention than the true; and the slow processes of democracy struggle to keep up with the speed of technology. Though the concerns are grave, and the interviewees are clearly concerned, the tonality is still one of hopefulness; the problem here is not really social media, but “surveillance capitalism,” and if “surveillance capitalism” can be thwarted then the true potential of social media can be attained. And the people leading that charge against “surveillance capitalism”? Why, none other than the reformed insiders in the film.

    While the bulk of the film consists of interviews, and news clips, the film is periodically interrupted by a narrative in which a family with three teenage children is shown. The Mother (Barbara Gehring) and Step-Father (Chris Grundy) are concerned with their children’s social media usage, even as they are glued to their own devices. As for the children: the oldest, Cassandra (Kara Hayward), is presented as skeptical towards social media; the youngest, Isla (Sophia Hammons), is eager for online popularity; and the middle child, Ben (Skyler Gisondo), eventually falls down the rabbit hole of recommended conspiratorial content. As the insiders, and academics, talk about the various dangers of social media the film shifts to the narrative to dramatize these moments – thus a discussion of social media’s impact on young teenagers, particularly girls, cuts to Isla being distraught after an insulting comment is added to one of the images she uploads. Cassandra (that name choice can’t be a coincidence) is presented as most in line with the general message of the film: the character refers to Jaron Lanier as a “genius” and in another sequence is shown reading Zuboff’s The Age of Surveillance Capitalism. Yet the member of the family the film dwells on the most is almost certainly Ben. For the purposes of dramatizing how an algorithm works, the film repeatedly returns to a creepy depiction of the Advertising, Engagement, and Growth AIs (all played by Vincent Kartheiser) as they scheme to get Ben to stay glued to his phone. Beyond the screens, the world in the narrative is being rocked by a strange protest movement calling itself “The Extreme Center” – whose argument seems to be that both sides can’t be trusted – and Ben eventually gets wrapped up in their message. The family’s narrative concludes with Ben and Cassandra getting arrested at a raucous rally held by “The Extreme Center,” sitting handcuffed on the ground and wondering how it is that this could have happened.

    To the extent that The Social Dilemma builds towards a conclusion, it is the speech that Harris gives (before an audience that includes many of the other interviewees in the film). And in that speech, and the other comments made around it, the point that is emphasized is that Silicon Valley must get away from “surveillance capitalism.” It must embrace “humane technology” that seeks to empower users not entangle them. Emphasizing that, despite how things have turned out, “I don’t think these guys set out to be evil,” the various insiders double down on their belief in high-tech’s liberatory potential. Contrasting rather unflattering imagery of Mark Zuckerberg testifying (without genuinely calling him out) with images of Steve Jobs in his iconic turtleneck, the film claims “the idea of humane technology, that’s where Silicon Valley got its start.” And before the credits roll, Harris seems to speak for his fellow insiders as he notes “we built these things, and we have a responsibility to change it.” For those who found the film unsettling, and who are confused by exactly what they are meant to do if they are not part of Harris’s “we,” the film offers some straightforward advice. Drawing on their own digital habits, the insiders recommend: turning off notifications, never watching a recommended video, opting for a less-invasive search engine, trying to escape your content bubble, keeping your devices out of your bedroom, and being a critical consumer of information.

    It is a disturbing film, and it is constructed so as to unsettle the viewer, but it still ends on a hopeful note: reform is possible, and the people in this film are leading that charge. The problem is not social media as such, but the ways in which “surveillance capitalism” has thwarted what social media could really be. If, after watching The Social Dilemma, you feel concerned about what “surveillance capitalism” has done to social media (and you feel prepared to make some tweaks in your social media use) but ultimately trust that Silicon Valley insiders are on the case—then the film has succeeded in its mission. After all, the film may be telling you to turn off Facebook notifications, but it doesn’t recommend deleting your account.

    Yet one of the points the film makes is that you should not accept the information that social media presents to you at face value. And in the same spirit, you should not accept the comments made by oh-so-remorseful Silicon Valley insiders at face value either. To be absolutely clear: we should be concerned about the impacts of social media, we need to work to rein in the power of these tech companies, we need to be willing to have the difficult discussion about what kind of society we want to live in…but we should not believe that the people who got us into this mess—who lacked the foresight to see the possible downsides in what they were building—will get us out of this mess. If these insiders genuinely did not see the possible downsides of what they were building, then they are fools who should not be trusted. And if these insiders did see the possible downsides, continued building these things anyway, and are now pretending that they did not see the downsides, then they are liars who definitely should not be trusted.

    It’s true, arsonists know a lot about setting fires, and a reformed arsonist might be able to give you some useful fire safety tips—but they are still arsonists.

    There is much to be said about The Social Dilemma. Indeed, anyone who cares about these issues (unfortunately) needs to engage with The Social Dilemma if for no other reason than the fact that this film will be widely watched, and will thus set much of the ground on which these discussions take place. Therefore, it is important to dissect certain elements of the film. To be clear, there is a lot to explore in The Social Dilemma—a book or journal issue could easily be published in which the docudrama is cut into five minute segments with academics and activists being each assigned one segment to comment on. While there is not the space here to offer a frame by frame analysis of the entire film, there are nevertheless a few key segments in the film which deserve to be considered. Especially because these key moments capture many of the film’s larger problems.

    “when bicycles showed up”

    A moment in The Social Dilemma that perfectly, if unintentionally, sums up many of the major flaws with the film occurs when Tristan Harris opines on the history of bicycles. There are several problems in these comments, but taken together these lines provide you with almost everything you need to know about the film. As Harris puts it:

    No one got upset when bicycles showed up. Right? Like, if everyone’s starting to go around on bicycles, no one said, ‘Oh, my God, we’ve just ruined society. [chuckles] Like, bicycles are affecting people. They’re pulling people away from their kids. They’re ruining the fabric of democracy. People can’t tell what’s true.’ Like we never said any of that stuff about a bicycle.

    Here’s the problem: Harris’s comments about bicycles are wrong.

    They are simply historically inaccurate. Some basic research into the history of bicycles that looks at the ways that people reacted when they were introduced would reveal that many people were in fact quite “upset when bicycles showed up.” People absolutely were concerned that bicycles were “affecting people,” and there were certainly some who were anxious about what these new technologies meant for “the fabric of democracy.” Granted, that there were such adverse reactions to the introduction of bicycles should not be seen as particularly surprising, because even a fairly surface-level reading of the history of technology reveals that when new technologies are introduced they tend to be met not only with excitement, but also with dread.

    Yet, what makes Harris’s point so interesting is not just that he is wrong, but that he is so confident while being so wrong. Smiling before the camera, in what is obviously supposed to be a humorous moment, Harris makes a point about bicycles that is surely one that will stick with many viewers—and what he is really revealing is that he needs to take some history classes (or at least do some reading). It is genuinely rather remarkable that this sequence made it into the final cut of the film. This was clearly an expensive production, but they couldn’t have hired a graduate student to watch the film and point out “hey, you should really cut this part about bicycles, it’s wrong”? It is hard to put much stock in Harris, and friends, as emissaries of technological truth when they can’t be bothered to do basic research.

    That Harris speaks so assuredly about something which he is so wrong about gets at one of the central problems with the reformed insiders of The Social Dilemma. Though these are clearly intelligent people (lots of emphasis is placed on the fancy schools they attended), they know considerably less than they would like the viewers to believe. Of course, one of the ways that they get around this is by confidently pretending they know what they’re talking about, which manifests itself by making grandiose claims about things like bicycles that just don’t hold up. The point is not to mock Harris for this mistake (though it really is extraordinary that the segment did not get cut), but to make the following point: if Harris, and his friends, had known a bit more about the history of technology, and perhaps if they had a bit more humility about what they don’t know, perhaps they would not have gotten all of us into this mess.

    A point that is made by many of the former insiders interviewed for the film is that they didn’t know what the impacts would be. Over and over again we hear some variation of “we meant well” or “we really thought we were doing something great.” It is easy to take such comments as expressions of remorse, but it is more important to see such comments as confessions of that dangerous mixture of hubris and historical/social ignorance that is so common in Silicon Valley. Or, to put it slightly differently, these insiders really needed to take some more courses in the humanities. You know how you could have known that technologies often have unforeseen consequences? Study the history of technology. You know how you could have known that new media technologies have jarring political implications? Read some scholarship from media studies. A point that comes up over and over again in such scholarly work, particularly works that focus on the American context, is that optimism and enthusiasm for new technology often keeps people (including inventors) from seeing the fairly obvious risks—and all of these woebegone insiders could have known that…if they had only been willing to do the reading. Alas, as anyone who has spent time in a classroom knows, a time honored way of covering up for the fact that you haven’t done the reading is just to speak very confidently and hope that your confidence will successfully distract from the fact that you didn’t do the reading.

    It would be an exaggeration to claim “all of these problems could have been prevented if these people had just studied history!” And yet, these insiders (and society at large) would likely be better able to make sense of these various technological problems if more people had an understanding of that history. At the very least, such historical knowledge can provide warnings about how societies often struggle to adjust to new technologies, can teach how technological progress and social progress are not synonymous, can demonstrate how technologies have a nasty habit of biting back, and can make clear the many ways in which the initial liberatory hopes that are attached to a technology tend to fade as it becomes clear that the new technology has largely reinscribed a fairly conservative status quo.

    At the very least, knowing a bit more about the history of technology can keep you from embarrassing yourself by confidently claiming that “we never said any of that stuff about a bicycle.”

    “to destabilize”

    While The Social Dilemma expresses concern over how digital technologies impact a person’s body, the film is even more concerned about the way these technologies impact the body politic. A worry that is captured by Harris’s comment that:

    We in the tech industry have created the tools to destabilize and erode the fabric of society.

    That’s quite the damning claim, even if it is one of the claims in the film that probably isn’t all that controversial these days. Though many of the insiders in the film pine nostalgically for those idyllic days from ten years ago when much of the media and the public looked so warmly towards Silicon Valley, this film is being released at a moment when much of that enthusiasm has soured. One of the odd things about The Social Dilemma is that politics are simultaneously all over the film, and yet politics in the film are very slippery. When the film warns of looming authoritarianism: Bolsonaro gets some screen time, Putin gets some ominous screen time—but though Trump looms in the background of the film he’s pretty much unseen and unnamed. And when US politicians do make appearances we get Marco Rubio and Jeff Flake talking about how people have become too polarized and Jon Tester reacting with discomfort to Harris’s testimony. Of course, in the clip that is shown, Rubio speaks some pleasant platitudes about the virtues of coming together…but what does his voting record look like?

    The treatment of politics in The Social Dilemma comes across most clearly in the narrative segment, wherein much attention is paid to a group that calls itself “The Extreme Center.” Though the ideology of this group is never made quite clear, it seems to be a conspiratorial group that takes as its position that “both sides are corrupt” – rejecting left and right it therefore places itself in “the extreme center.” It is into this group, and the political rabbit hole of its content, that Ben falls in the narrative – and the raucous rally (that ends in arrests) in the narrative segment is one put on by the “extreme center.” It may appear that “the extreme center” is just a simple storytelling technique, but more than anything else it feels like the creation of this fictional protest movement is really just a way for the film to get around actually having to deal with real world politics.

    The film includes clips from a number of protests (though it does not bother to explain who these people are and why they are protesting), and there are some moments when various people can be heard specifically criticizing Democrats or Republicans. But even as the film warns of “the rabbit hole” it doesn’t really spend much time on examples. Heck, the first time that the words “surveillance capitalism” get spoken in the film is in a clip of Tucker Carlson. Some points are made about “pizzagate” but the documentary avoids commenting on the rapidly spreading QAnon conspiracy theory. And to the extent that any specific conspiracy receives significant attention it is the “flat earth” conspiracy. Granted, it’s pretty easy to deride the flat earthers, and in focusing on them the film makes a very conscious decision to not focus on white supremacist content and QAnon. Ben falls down the “extreme center” rabbit hole, and it may well be that the filmmakers have him fall down this fictional rabbit hole so that they don’t have to talk about the likelihood that (in the real world) he would fall down a far-right rabbit hole. But The Social Dilemma doesn’t want to make that point, after all, in the political vision it puts forth the problem is that there is too much polarization and extremism on both sides.

    The Social Dilemma clearly wants to avoid taking sides. And in so doing, it demonstrates the ways in which Silicon Valley has taken sides. After all, to focus so heavily on polarization and the extremism of “both sides” just serves to create a false equivalency where none exists. But, the view that “the Trump administration has mismanaged the pandemic” and the view that “the pandemic is a hoax” – are not equivalent. The view that “climate change is real” and “climate change is a hoax” – are not equivalent. People organizing for racial justice and people organizing because they believe that Democrats are satanic cannibal pedophiles – are not equivalent. The view that “there is too much money in politics” and the view that “the Jews are pulling the strings” – are not equivalent. Of course, to say that these things “are not equivalent” is to make a political judgment, but by refusing to make such a judgment The Social Dilemma presents both sides as being equivalent. There are people online who are organizing for the cause of racial justice, and there are white-supremacists organizing online who are trying to start a race war—those causes may look the same to an algorithm, and they may look the same to the people who created those algorithms, but they are not the same.

    You cannot address the fact that Facebook and YouTube have become hubs of violent xenophobic conspiratorial content unless you are willing to recognize that Facebook and YouTube actively push violent xenophobic conspiratorial content.

    It is certainly true that there are activist movements from the left and the right organizing online at the moment, but when you watch a movie trailer on YouTube the next recommended video isn’t going to be a talk by Angela Davis.

    “it’s the critics”

    Much of the content of The Social Dilemma is unsettling, and the film makes it clear that change is necessary. Nevertheless, the film ends on a positive note. Pivoting away from gloominess, the film shows the rapt audience nodding as Harris speaks of the need for “humane technology,” and this assembled cast of reformed insiders is presented as proof that Silicon Valley is waking up to the need to take responsibility. Near the film’s end, Jaron Lanier hopefully comments that:

    it’s the critics that drive improvement. It’s the critics who are the true optimists.

    Thus, the sense that is conveyed at the film’s close is that despite the various worries that had been expressed—the critics are working on it, and the critics are feeling good.

    But, who are the critics?

    The people interviewed in the film, obviously.

    And that is precisely the problem. “Critic” is something of a challenging term to wrestle with as it doesn’t necessarily take much to be able to call yourself, or someone else, a critic. Thus, the various insiders who are interviewed in the film can all be held up as “critics” and can all claim to be “critics” thanks to the simple fact that they’re willing to say some critical things about Silicon Valley and social media. But what is the real content of the criticisms being made? Some critics are going to be more critical than others, so how critical are these critics? Not very.

    The Social Dilemma is a redemption tour that allows a bunch of remorseful Silicon Valley insiders to rebrand themselves as critics. Based on the information provided in the film it seems fairly obvious that a lot of these individuals are responsible for causing a great deal of suffering and destruction, but the film does not argue that these men (and they are almost entirely men) should be held accountable for their deeds. The insiders have harsh things to say about algorithms, they too have been buffeted about by nonstop nudging, they are also concerned about the rabbit hole, they are outraged at how “surveillance capitalism” has warped technological possibilities—but remember, they meant well, and they are very sorry.

One of the fascinating things about The Social Dilemma is that in one scene a person will proudly note that they are responsible for creating a certain thing, and then in the next scene they will say that nobody is really to blame for that thing. Certainly not them: they thought they were making something great! The insiders simultaneously want to enjoy the cultural clout and authority that comes from being the one who created the like button, while also wanting to escape any accountability for being the person who created the like button. They are willing to be critical of Silicon Valley, they are willing to be critical of the tools they created, but when it comes to their own culpability they are desperate to hide behind a shield of “I meant well.” The insiders do a good job of saying remorseful words, and the camera catches them looking appropriately pensive, but it’s no surprise that these “critics” should feel optimistic: they’ve made fortunes utterly screwing up society, and they’ve done such a good job of getting away with it that they now get to elevate themselves once again by rebranding themselves as “critics.”

    To be a critic of technology, to be a social critic more broadly, is rarely a particularly enjoyable or a particularly profitable undertaking. Most of the time, if you say anything critical about technology you are mocked as a Luddite, laughed at as a “prophet of doom,” derided as a technophobe, accused of wanting everybody to go live in caves, and banished from the public discourse. That is the history of many of the twentieth century’s notable social critics who raised the alarm about the dangers of computers decades before most of the insiders in The Social Dilemma were born. Indeed, if you’re looking for a thorough retort to The Social Dilemma you cannot really do better than reading Joseph Weizenbaum’s Computer Power and Human Reason—a book which came out in 1976. That a film like The Social Dilemma is being made may be a testament to some shifting attitudes towards certain types of technology, but it was not that long ago that if you dared suggest that Facebook was a problem you were denounced as an enemy of progress.

    There are many phenomenal critics speaking out about technology these days. To name only a few: Safiya Noble has written at length about the ways that the algorithms built by companies like Google and Facebook reinforce racism and sexism; Virginia Eubanks has exposed the ways in which high-tech tools of surveillance and control are first deployed against society’s most vulnerable members; Wendy Hui Kyong Chun has explored how our usage of social media becomes habitual; Jen Schradie has shown the ways in which, despite the hype to the contrary, online activism tends to favor right-wing activists and causes; Sarah Roberts has pulled back the screen on content moderation to show how much of the work supposedly being done by AI is really being done by overworked and under-supported laborers; Ruha Benjamin has made clear the ways in which discriminatory designs get embedded in and reified by technical systems; Christina Dunbar-Hester has investigated the ways in which communities oriented around technology fail to overcome issues of inequality; Sasha Costanza-Chock has highlighted the need for an approach to design that treats challenging structural inequalities as the core objective, not an afterthought; Morgan Ames expounds upon the “charisma” that develops around certain technologies; and Meredith Broussard has brilliantly inveighed against the sort of “technochauvinist” thinking—the belief that technology is the solution to every problem—that is so clearly visible in The Social Dilemma. To be clear, this list of critics is far from all-inclusive. There are numerous other scholars who certainly could have had their names added here, and there are many past critics who deserve to be named for their disturbing prescience.

    But you won’t hear from any of those contemporary critics in The Social Dilemma. Instead, viewers of the documentary are provided with a steady set of mostly male, mostly white, reformed insiders who were unable to predict that the high-tech toys they built might wind up having negative implications.

It is not only that The Social Dilemma ignores most of the figures who truly deserve to be seen as critics; in doing so, the film also sets the boundaries for who gets to be a critic and what that criticism can look like. The world of criticism that The Social Dilemma sets up is one wherein a person achieves legitimacy as a critic of technology by virtue of having once been a tech insider. The film lays out, and then sets about policing the borders of, what can pass for acceptable criticism of technology. This not only limits the cast of critics to a narrow slice of mostly white, mostly male insiders, it also limits what can be put forth as a solution. You can rest assured that the former insiders are not going to advocate for a response that would involve holding the people who build these tools accountable for what they’ve created. It is remarkable that no one in the film really goes after Mark Zuckerberg, but many of these insiders can’t go after Zuckerberg—because any vitriol they direct at him could just as easily be directed at them as well.

It matters who gets to be deemed a legitimate critic. When news networks are looking to have a critic on, it matters whether they call Tristan Harris or one of the previously mentioned thinkers; when Facebook does something else horrendous, it matters whether a newspaper seeks out someone whose own self-image is bound up in the idea that the company means well or someone who is willing to say that Facebook is itself the problem. When there are dangerous fires blazing everywhere, it matters whether the voices that get heard are apologetic arsonists or firefighters.

Near the film’s end, while the credits play, Jaron Lanier says of Silicon Valley: “I don’t hate them. I don’t wanna do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world. You know?” These comments capture the core ideology of The Social Dilemma: that Google and Facebook can be reformed, and that the people who can reform them are the people who built them.

    But considering all of the tangible harm that Google and Facebook have done, it is far past time to say that it isn’t enough to “reform” them. We need to stop them.

    Conclusion: On “Humane Technology”

The Social Dilemma is an easy film to criticize. After all, it’s a highly manipulative piece of filmmaking, filled with overly simplified claims, historical inaccuracies, politics lacking conviction, and a cast of remorseful insiders who still believe Silicon Valley’s basic mythology. The film is designed to scare you, but it then works to direct that fear into a few banal personal lifestyle tweaks, while convincing you that Silicon Valley really does mean well. It is important to view The Social Dilemma not as a genuine warning, or as a push for a genuine solution, but as part of a desperate move by Silicon Valley to rehabilitate itself so that any push for reform and regulation can be captured and defanged by “critics” of its own choosing.

Yet, it is too simple (even if it is accurate) to portray The Social Dilemma as an attempt by Silicon Valley to control both the sale of flamethrowers and fire extinguishers. Because such a focus keeps our attention pinned to Silicon Valley. It is easy to criticize Silicon Valley, and Silicon Valley definitely needs to be criticized—but the bright-eyed faith in high-tech gadgets and platforms that these reformed insiders still cling to is not shared only by them. The people in this film blame “surveillance capitalism” for warping the liberatory potential of Internet-connected technologies, and many people would respond to this by pushing back on Zuboff’s neologism to point out that “surveillance capitalism” is really just “capitalism” and that therefore the problem is really that capitalism is warping the liberatory potential of Internet-connected technologies. Yes, we certainly need to have a conversation about what to do with Facebook and Google (dismantle them). But at a certain point we also need to recognize that the problem is deeper than Facebook and Google; at a certain point we need to be willing to talk about computers.

The question that occupied many past critics of technology was: what kinds of technology do we really need? And they were clear that this was a question far too important to be left to machine-worshippers.

    The Social Dilemma responds to the question of “what kind of technology do we really need?” by saying “humane technology.” After all, the organization The Center for Humane Technology is at the core of the film, and Harris speaks repeatedly of “humane technology.” At the surface level it is hard to imagine anyone saying that they disapprove of the idea of “humane technology,” but what the film means by this (and what the organization means by this) is fairly vacuous. When the Center for Humane Technology launched in 2018, to a decent amount of praise and fanfare, it was clear from the outset that its goal had more to do with rehabilitating Silicon Valley’s image than truly pushing for a significant shift in technological forms. Insofar as “humane technology” means anything, it stands for platforms and devices that are designed to be a little less intrusive, that are designed to try to help you be your best self (whatever that means), that try to inform you instead of misinform you, and that make it so that you can think nice thoughts about the people who designed these products. The purpose of “humane technology” isn’t to stop you from being “the product,” it’s to make sure that you’re a happy product. “Humane technology” isn’t about deleting Facebook, it’s about renewing your faith in Facebook so that you keep clicking on the “like” button. And, of course, “humane technology” doesn’t seem to be particularly concerned with all of the inhumanity that goes into making these gadgets possible (from mining, to conditions in assembly plants, to e-waste). “Humane technology” isn’t about getting Ben or Isla off their phones, it’s about making them feel happy when they click on them instead of anxious. In a world of empowered arsonists, “humane technology” seeks to give everyone a pair of asbestos socks.

    Many past critics also argued that what was needed was to place a new word before technology – they argued for “democratic” technologies, or “holistic” technologies, or “convivial” technologies, or “appropriate” technologies, and this list could go on. Yet at the core of those critiques was not an attempt to salvage the status quo but a recognition that what was necessary in order to obtain a different sort of technology was to have a different sort of society. Or, to put it another way, the matter at hand is not to ask “what kind of computers do we want?” but to ask “what kind of society do we want?” and to then have the bravery to ask how (or if) computers really fit into that world—and if they do fit, how ubiquitous they will be, and who will be responsible for the mining/assembling/disposing that are part of those devices’ lifecycles. Certainly, these are not easy questions to ask, and they are not pleasant questions to mull over, which is why it is so tempting to just trust that the Center for Humane Technology will fix everything, or to just say that the problem is Silicon Valley.

    Thus as the film ends we are left squirming unhappily as Netflix (which has, of course, noted the fact that we watched The Social Dilemma) asks us to give the film a thumbs up or a thumbs down – before it begins auto-playing something else.

    The Social Dilemma is right in at least one regard, we are facing a social dilemma. But as far as the film is concerned, your role in resolving this dilemma is to sit patiently on the couch and stare at the screen until a remorseful tech insider tells you what to do.

    _____

Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. New York: WH Freeman & Co.

    Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

a review of Jen Schradie, The Revolution That Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, 2019)

    by Zachary Loeb

    ~

Despite the oft-repeated, and rather questionable, trope that social media is biased against conservatives, and despite the attention that has been lavished on tech-savvy left-aligned movements (such as Occupy!) in recent years, social media is not necessarily of greater use to the left. It may be quite the opposite. This is a topic that documentary filmmaker, activist, and sociologist Jen Schradie explores in depth in her excellent and important book The Revolution That Wasn’t: How Digital Activism Favors Conservatives. Engaging with the political objectives of activists on the left and the right, Schradie’s book considers the political values that are reified in the technical systems themselves and the ways in which those values more closely align with the aims of conservative groups. Furthermore, Schradie emphasizes the socio-economic factors that allow particular groups to successfully harness high-tech tools, thereby demonstrating how digital activism reinforces the power of those who already enjoy a fair amount of it. Rather than suggesting that high-tech tools have somehow been stolen from the left by the right, The Revolution That Wasn’t argues that these were not the left’s tools in the first place.

The background against which Schradie’s analysis unfolds is the state of North Carolina in the years after 2011. Generally seen as a “red state,” North Carolina had flipped blue for Barack Obama in 2008, leading to the state being increasingly seen as a battleground. Even though the state was starting to take on a purplish color, North Carolina was still home to a deeply entrenched conservatism that was reflected (and still is reflected) in many aspects of the state’s laws, and in the legacy of racist segregation that is still felt in the state. Though the Occupy! movement lingers in the background of Schradie’s account, her focus is on struggles in North Carolina around unionization, the rapid growth of the Tea Party, and the emergence of the “Moral Monday” movement which inspired protests across the state (starting in 2013). While many considerations of digital activism have focused on hip young activists festooned with piercings, hacker skills, and copies of The Coming Insurrection—the central characters of Schradie’s book are members of the labor movement, campus activists, Tea Party members, Preppers, people associated with “Patriot” groups, as well as a smattering of paid organizers working for large organizations. And though Schradie is closely attuned to the impact that financial resources have within activist movements, she pushes back against the “astroturf” accusation that is sometimes aimed at right-wing activists, arguing that the groups she observed on both the right and the left reflected genuine populist movements.

There is a great deal of specificity to Schradie’s study, and many of the things that Schradie observes are particular to the context of North Carolina, but the broader lessons regarding political ideology and activism are widely applicable. In looking at the political landscape in North Carolina, Schradie carefully observes the various groups that were active around the unionization issue, and pays close attention to the ways in which digital tools were used in these groups’ activism. The levels of digital savviness vary across the political groups, and most of the groups demonstrate at least some engagement with digital tools; however, some groups embraced the affordances of digital tools to a much greater extent than others. And where Schradie’s book makes its essential intervention is not simply in showing these differing levels of digital use, but in explaining why. For one of the core observations of Schradie’s account of North Carolina is that it was not the left-leaning groups, but the right-leaning groups who were able to make the most out of digital tools. It’s a point which, to a large degree, runs counter to general narratives on the left (and possibly also the right) about digital activism.

    In considering digital activism in North Carolina, Schradie highlights the “uneven digital terrain that largely abandoned left working-class groups while placing right-wing reformist groups at the forefront of digital activism” (Schradie, 7). In mapping out this terrain, Schradie emphasizes three factors that were pivotal in tilting this ground, namely class, organization, and ideology. Taken independently of one another, each of these three factors provides valuable insight into the challenges posed by digital activism, but taken together they allow for a clear assessment of the ways that digital activism (and digital tools themselves) favor conservatives. It is an analysis that requires some careful wading into definitions (the different ways that right and left groups define things like “freedom” really matters), but these three factors make it clear that “rather than offering a quick technological fix to repair our broken democracy, the advent of digital activism has simply ended up reproducing, and in some cases, intensifying, preexisting power imbalances” (Schradie, 7).

    Considering that the core campaign revolves around unionization, it should not particularly be a surprise that class is a major issue in Schradie’s analysis. Digital evangelists have frequently suggested that high-tech tools allow for the swift breaking down of class barriers by providing powerful tools (and informational access) to more and more people—but the North Carolinian case demonstrates the ways in which class endures. Much of this has to do with the persistence of the digital divide, something which can easily be overlooked by onlookers (and academics) who have grown accustomed to digital tools. Schradie points to the presence of “four constraints” that have a pivotal impact on the class aspect of digital activism: “Access, Skills, Empowerment, and Time” (or ASETs for short; Schradie, 61). “Access” points to the most widely understood part of the digital divide, the way in which some people simply do not have a reliable and routine way of getting ahold of and/or using digital tools—it’s hard to build a strong movement online, when many of your members have trouble getting online. This in turn reverberates with “Skills,” as those who have less access to digital tools often lack the know-how that develops from using those tools—not everyone knows how to craft a Facebook post, or how best to make use of hashtags on Twitter. While digital tools have often been praised precisely for the ways in which they empower users, this empowerment is often not felt by those lacking access and skills, leading many individuals from working-class groups to see “digital activism as something ‘other people’ do” (Schradie, 64). And though it may be the easiest factor to overlook, engaging in digital activism requires Time, something which is harder to come by for individuals working multiple jobs (especially of the sort with bosses that do not want to see any workers using phones at work).

When placed against the class backgrounds of the various activist groups considered in the book, the ASETs framework clearly sets up a situation in which conservative activists had the advantage. What Schradie found was “not just a question of the old catching up with the young, but of the poor never being able to catch up with the rich” (Schradie, 79), as the more financially secure conservative activists simply had more ASETs than the working-class activists on the left. And though the right-wing activists skewed older than the left-wing activists, they proved quite capable of learning to use new high-tech tools. Furthermore, an extremely important aspect here is that the working-class activists (given their economic precariousness) had more to lose from engaging in digital activism—the conservative retiree will be much less worried about losing their job than the garbage truck driver interested in unionizing.

    Though the ASETs echo throughout the entirety of Schradie’s account, “Time” plays an essential connective role in the shift from matters of class to matters of organization. Contrary to the way in which the Internet has often been praised for invigorating horizontal movements (such as Occupy!), the activist groups in North Carolina attest to the ways in which old bureaucratic and infrastructural tools are still essential. Or, to put it another way, if the various ASETs are viewed as resources, then having a sufficient quantity of all four is key to maintaining an organization. This meant that groups with hierarchical structures, clear divisions of labor, and more staff (be these committed volunteers or paid workers) were better equipped to exploit the affordances of digital tools.

Importantly, this was not entirely one-sided. Tea Party groups were able to tap into funding and training from larger networks of right-wing organizations, but national unions and civil rights organizations were also able to support left-wing groups. In terms of organization, the bias was less a matter of a right/left dichotomy and more a reflection of a clash between reformist and radical groups. The advantage went to “reformist” groups (right and left) that replicated present power structures and worked within already existing social systems; the groups that lost out tended to be the ones that more fully eschewed hierarchy (student activists being one example). Though digital democracy can still be “participatory, pluralist, and personalized,” Schradie’s analysis demonstrates how “the internet over the long-term favored centralized activism over connective action; hierarchy over horizontalism; bureaucratic positions over networked persons” (Schradie, 134). Thus the importance of organization demonstrates not how digital tools allowed for a new “participatory democracy” but rather how standard hierarchical techniques continue to be key for groups wanting to participate in democracy.

Beyond class and organization (insofar as it is truly possible to get past either), the ideology of activists on the left and activists on the right has a profound influence on how these groups use digital tools. For it isn’t the case that the left and the right try to use the Internet for the exact same purpose. Schradie captures this as a difference between pursuing fairness (the left) and freedom (the right)—left-wing groups largely sought a “fairer” allocation of societal power, while those on the right defined “freedom” largely in terms of protecting the allocation of power already enjoyed by these conservative activists. Believing that they had been shut out by the “liberal media,” many conservatives flocked to and celebrated digital tools as a way of getting out “the Truth”; their “digital practices were unequivocally focused on information” (Schradie, 167). As a way of disseminating information to other people already in possession of ASETs, digital means provided right-wing activists with powerful tools for getting around traditional media gatekeepers. While activists on the left certainly used digital tools for spreading information, their use of the internet tended to be focused more heavily on organizing: on bringing people together in order to advocate for change. Further complicating things for the left is that Schradie found there to be less unity amongst leftist groups in contrast to the relative hegemony found on the right. Comparing the intersection of ideological agendas with digital tools, Schradie is forthright in stating, “the internet was simply more useful to conservatives who could broadcast propaganda and less effective for progressives who wanted to organize people” (Schradie, 223).

Much of the way that digital activism has been discussed by the press, and by academics, has advanced a narrative that frames digital activism as enhancing participatory democracy. In these standard tales (which often ground themselves in accounts of the origins of the internet that place heavy emphasis on the counterculture), the heroes of digital activism are usually young leftists. Yet, as Schradie argues, “to fully explain digital activism in this era, we need to take off our digital-tinted glasses” (Schradie, 259). Removing such glasses reveals the way in which they have too often focused attention on the spectacular efforts of some movements, while overlooking the steady work of others—thus driving more attention to groups like Occupy! than to the buildup of right-wing groups. And looking at the state of digital activism through clearer eyes reveals many aspects of digital life that are obvious, yet which are continually forgotten, such as the fact that “the internet is a tool that favors people with more money and power, often leaving those without resources in the dust” (Schradie, 269). The example of North Carolina shows that groups on the left and the right are all making use of the Internet, but it is not just a matter of some groups having more ASETs, it is also the fact that the high-tech tools of digital activism serve certain types of values and aims better than others. And, as Schradie argues throughout her book, those tend to be the causes and aims of conservative activists.

    Despite the revolutionary veneer with which the Internet has frequently been painted, “the reality is that throughout history, communications tools that seemed to offer new voices are eventually owned or controlled by those with more resources. They eventually are used to consolidate power, rather than to smash it into pieces and redistribute it” (Schradie, 25). The question with which activists, particularly those on the left, need to wrestle is not just whether or not the Internet is living up to its emancipatory potential—but whether or not it ever really had that potential in the first place.

    * * *

In an iconic photograph from 1948, a jubilant Harry S. Truman holds aloft a copy of the Chicago Daily Tribune emblazoned with the headline “Dewey Defeats Truman.” Despite the polls having predicted that Dewey would be victorious, when the votes were counted Truman had been sent back to the White House and the Democrats took control of the House and the Senate. An echo of this moment occurred some sixty-eight years later, though there was no comparable photo of Donald Trump smirking while holding up a newspaper carrying the headline “Clinton Defeats Trump.” In the aftermath of Trump’s victory pundits ate crow in a daze, pollsters sought to defend their own credibility by emphasizing that their models had never actually said that there was no chance of a Trump victory, and even some in Trump’s circle seemed stunned by the result.

As shock turned to resignation, the search for explanations and scapegoats began in earnest. Democrats blamed Russian hackers, voter suppression, the media’s obsession with Trump, left-wing voters who didn’t fall in line, and James Comey, while Republicans claimed that the shock was simply proof that the media was out of touch with the voters. Yet, Republicans and Democrats seemed to at least agree on one thing: to understand Trump’s victory, it was necessary to think about social media. Granted, Republicans and Democrats were divided on whether this was a matter of giving credit or assigning blame. On the one hand, Trump had been able to effectively use Twitter to directly engage with his fan base; on the other hand, platforms like Facebook had been flooded with disinformation that spread rapidly through the online ecosystem. It did not take long for representatives, including executives, from the various social media companies to find themselves called before Congress, where these figures were alternately grilled about supposed bias against conservatives on their platforms, and taken to task for how their platforms had been so easily manipulated into helping Trump win the election.

If the tech companies were only finding themselves summoned before Congress it would have been bad enough, but they were also facing frustrated employees, as well as disgruntled users, and the word “techlash” was being used to describe the wave of mounting frustration with these companies. Certainly, unease with the power and influence of the tech titans had been growing for years. Cambridge Analytica was hardly the first tech scandal. Yet much of that earlier displeasure was tempered by an overwhelmingly optimistic attitude towards the tech giants, as though the industry’s problematic excesses were indicative of growing pains as opposed to being signs of intrinsic anti-democratic (small d) biases. There were many critics of the tech industry before the arrival of the “techlash,” but they were liable to find themselves denounced as Luddites if they failed to show sufficient fealty to the tech companies. From company CEOs to an adoring tech press to numerous technophilic academics, in the years prior to the 2016 election smart phones and social media were hailed for their liberating and democratizing potential. Videos shot on smart phone cameras and uploaded to YouTube, political gatherings organized on Facebook, activist campaigns turning into mass movements thanks to hashtags—all had been treated as proof positive that high-tech tools were breaking apart the old hierarchies and ushering in a new era of high-tech horizontal politics.

    Alas, the 2016 election was the rock against which many of these high-tech hopes crashed.

    And though there are many strands contributing to the “techlash,” it is hard to make sense of this reaction without seeing it in relation to Trump’s victory. Users of Facebook and Twitter had been frustrated with those platforms before, but at the core of the “techlash” has been a certain sense of betrayal. How could Facebook have done this? Why was Twitter allowing Trump to break its own terms of service on a daily basis? Why was Microsoft partnering with ICE? How come YouTube’s recommendation algorithms always seemed to suggest far-right content?

    To state it plainly: it wasn’t supposed to be this way.

    But what if it was? And what if it had always been?

In a 1985 interview with MIT’s newspaper The Tech, the computer scientist and social critic Joseph Weizenbaum had some blunt words about the ways in which computers had impacted society, telling his interviewer: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed” (ben-Aaron, 1985). This was not a new position for Weizenbaum; he had largely articulated the same idea in his 1976 book Computer Power and Human Reason, wherein he had pushed back at those he termed the “artificial intelligentsia” and the other digital evangelists of his day. Articulating his thoughts to the interviewer from The Tech, Weizenbaum raised further concerns about the close links between the military and computer work at MIT, and cast doubt on the real usefulness of computers for society—couching his dire fears in the social critic’s common defense “I hope I’m wrong” (ben-Aaron, 1985). Alas, as the decades passed, Weizenbaum came to feel that he had been right. When he turned his critical gaze to the internet in a 2006 interview, he decried the “flood of disinformation,” while noting “it just isn’t true that everyone has access to the so-called Information age” (Weizenbaum and Wendt 2015, 44-45).

    Weizenbaum was hardly the only critic to have looked askance at the growing importance placed on computers during the 20th century. Indeed, Weizenbaum’s work was heavily influenced by that of his friend and fellow social critic Lewis Mumford, who had gone so far as to identify the computer as the prototypical example of “authoritarian” technology (even suggesting that it was the rebirth of the “sun god” in technical form). Yet societies that are in love with their high-tech gadgets, and which often consider technological progress and societal progress to be synonymous, generally have rather little time for such critics. When times are good, such social critics are safely quarantined at the fringes of academic discourse (and completely ignored within broader society), but when things get rocky they have their woebegone revenge by being proven right.

    All of which is to say that thinkers like Weizenbaum and Mumford would almost certainly agree with The Revolution That Wasn’t; they would probably not, however, be surprised by it. After all, The Revolution That Wasn’t confirms that we are today living in the world about which previous generations of critics warned. Indeed, if there is one criticism to be made of Schradie’s work, it is that the book could have benefited from grounding its analysis more deeply in the longstanding critiques of technology made by the likes of Weizenbaum, Mumford, and quite a few other scholars and critics. Jo Freeman and Langdon Winner are both mentioned, but it is important to emphasize that many social critics warned about the conservative biases of computers long before Trump got a Twitter account, and long before Mark Zuckerberg was born. Our widespread refusal to heed these warnings, and the tendency to mock those issuing them as Luddites, technophobes, and prophets of doom, is arguably a fundamental cause of the present state of affairs which Schradie so aptly describes.

    With The Revolution That Wasn’t, Jen Schradie has made a vital intervention in current discussions (inside the academy and amongst activists) regarding the politics of social media. Eschewing a polemical tone that would either sing the praises of social media or condemn it outright, Schradie provides a measured assessment that addresses the way in which social media is actually being used by activists of varying political stripes—with a careful emphasis on the successes these groups have enjoyed. Schradie’s argument, and some of her conclusions, stand in jarring contrast to much of the literature that has framed social media as a particular boon to left-wing activists. Yet Schradie’s book highlights with disarming detail the ways in which a desire (on the part of left-leaning individuals) to believe that the Internet favors the left has been a sort of ideological blinder, one that has prevented them from fully coming to terms with how the Internet has re-entrenched the dominant powers in society.

    What Schradie’s book reveals is that “the internet did not wipe out barriers to activism; it just reflected them, and even at times exacerbated existing power differences” (Schradie, 245). Schradie allows the activists on both sides to speak in their own words, taking seriously their claims about what they were doing. And while the book is closely anchored in the context of a particular struggle in North Carolina, the analytical tools that Schradie develops (such as the ASET framework, and the tripartite emphasis on class/organization/ideology) allow Schradie’s conclusions to be mapped onto other social movements and struggles.

    While the research that went into The Revolution That Wasn’t clearly predates the election of Donald Trump, and though he is not a main character in the book, the 45th president lurks in the background of the book (or perhaps just in the reader’s mind). Had Trump lost the election, every part of Schradie’s analysis would be just as accurate and biting; however, those seeking to defend social media tools as inherently liberating would probably not find themselves on the defensive today (a position most of them never expected to be in). Yet what makes Schradie’s account so important is that the book is not simply concerned with whether or not particular movements used digital tools; rather, Schradie steps back to consider the degree to which the use of social media tools has been effective in fulfilling the political aims of the various groups. Yes, Occupy might have made canny use of hashtags (and, if one wants to be generous, one can say that it helped inject the discussion of inequality back into American politics), but nearly ten years later the wealth gap is continuing to grow. For all of the hopeful luster that has often surrounded digital tools, Schradie’s book shows the way in which these tools have just placed a fresh coat of paint on the same old status quo—even if this coat of paint is shiny and silvery.

    As the technophiles scramble to rescue the belief that the Internet is inherently democratizing, The Revolution That Wasn’t takes its place amongst a growing body of critical works that are willing to challenge the utopian aura that has been built up around the Internet. It must be emphasized, as the earlier allusion to Weizenbaum shows, that there have been thinkers criticizing computers and the Internet for as long as there have been computers and the Internet; of late, however, there has been an important expansion of such critical works. There is not the space here to offer an exhaustive account of all of the critical scholarship being conducted, but it is worthwhile to mention some exemplary recent works. Safiya Umoja Noble’s Algorithms of Oppression provides an essential examination of the ways in which societal biases, particularly about race and gender, are reinforced by search engines. Ruha Benjamin’s recent work on the “New Jim Code,” as seen in Race After Technology and the Captivating Technology volume she edited, foregrounds the ways in which technological systems reinforce white supremacy. The work of Virginia Eubanks, both Digital Dead End (whose concerns make it likely the most important precursor to Schradie’s book) and her more recent Automating Inequality, discusses the ways in which high tech systems are used to police and control the impoverished. Examinations of e-waste (such as Jennifer Gabrys’s Digital Rubbish) and infrastructure (such as Nicole Starosielski’s The Undersea Network, and Tung-Hui Hu’s A Prehistory of the Cloud) point to the ways in which colonial legacies are still very much alive in today’s high tech systems, while the internationalist sheen that is often ascribed to digital media is carefully deconstructed in works like Ramesh Srinivasan’s Whose Global Village? Works like Meredith Broussard’s Artificial Unintelligence and Shoshana Zuboff’s The Age of Surveillance Capitalism raise deep questions about the overall politics of digital technology. And, with its deep analysis of the way that race and class are intertwined with digital access and digital activism, The Revolution That Wasn’t deserves a place amongst such works.

    What much of this recent scholarship has emphasized is that technology is never neutral. And while this may be accepted wisdom amongst scholars in the relevant fields, these works (and scholars) have taken great care to make this point to the broader public. It is not just that tools can be used for good or for bad—it is that tools have particular biases built into them. Pretending those biases aren’t there doesn’t make them go away. Kranzberg’s first law asserts that technology is neither good nor bad, nor is it neutral; but when one moves from talking about technology in general to particular technologies, it is quite important to be able to say that certain technologies may actually be bad. This is a particular problem when one wants to consider things like activism. There has always been something asinine to the tactic of mocking activists pushing for social change while using devices created by massive multinational corporations (as the well-known comic by Matt Bors notes); however, the reason that this mockery is so often repeated is that it has a kernel of troubling truth to it. After all, there is something a little discomforting about using a device running on minerals mined in horrendous conditions, which was assembled in a sweatshop, and which will one day go on to be poisonous e-waste—for organizing a union drive.

    Matt Bors, detail from “Mister Gotcha” (2016)

    Or, to put it slightly differently, when we think about the democratizing potential of technology, to what extent are we privileging those who get to use (and discard) these devices over those whose labor goes into producing them? That activists may believe they are using a given device or platform for “good” purposes does not mean that the device itself is actually good. And this is a tension Schradie gets at when she observes that “instead of a revolutionary participatory tool, the internet just happened to be the dominant communication tool at the time of my research and simply became normalized into the groups’ organizing repertoire” (Schradie, 133). Of course, activists (of varying political stripes) are making use of the communication tools that are available to them and widely used in society. But just because activists use a particular communication tool doesn’t mean that they should fall in love with it.

    This is not in any way to call activists using these tools hypocritical, but it is a further reminder of the ways in which high-tech tools inscribe their users within the very systems they may be seeking to change. And this is certainly a problem that Schradie’s book raises, as she notes that one of the reasons conservative values get a bump from digital tools is that these conservatives are generally already the happy beneficiaries of the systems that created these tools. Scholarship on digital activism has considered the ideologies of various technologically engaged groups before, and there have been many strong works produced on hackers and open source activists, but often the emphasis has been placed on the ideologies of the activists without enough consideration being given to the ways in which the technical tools themselves embody certain political values (an excellent example of a work that truly considers activists picking their tools based on the values of those tools is Christina Dunbar-Hester’s Low Power to the People). Schradie’s focus on ideology is particularly useful here, as it helps to draw attention to the way in which various groups’ ideologies map onto or come into conflict with the ideologies that these technical systems already embody. What makes Schradie’s book so important is not just its account of how activists use technologies, but its recognition that these technologies are also inherently political.

    Yet the thorny question that undergirds much of the present discourse around computers and digital tools remains “what do we do if, instead of democratizing society, these tools are doing just the opposite?” And this question just becomes tougher the further down you go: if the problem is just Facebook, you can pose solutions such as regulation and breaking it up; however, if the problem is that digital society rests on a foundation of violent extraction, insatiable lust for energy, and rampant surveillance, solutions are less easily available. People have become so accustomed to thinking that these technologies are fundamentally democratic that they are loath to believe analyses, such as Mumford’s, which find them instead authoritarian by nature.

    While reports of a “techlash” may be overstated, it is clear that at the present moment it is permissible to be a bit more critical of particular technologies and the tech giants. However, there is still a fair amount of hesitance about going so far as to suggest that maybe there’s just something inherently problematic about computers and the Internet. After decades of being told that the Internet is emancipatory, many people remain committed to this belief, even in the face of mounting evidence to the contrary. Trump’s election may have placed some significant cracks in the dominant faith in these digital devices, but suggesting that the problem goes deeper than Facebook or Amazon is still treated as heretical. Nevertheless, it is a matter that is becoming harder and harder to avoid. For it is increasingly clear that it is not a matter of whether or not these devices can be used for this or that political cause, but of the overarching politics of these devices themselves. It is not just that digital activism favors conservatism, but as Weizenbaum observed decades ago, that “the computer has from the beginning been a fundamentally conservative force.”

    With The Revolution That Wasn’t, Jen Schradie has written an essential contribution to current conversations not only around the use of technology for political purposes, but also around the politics of technology. As an account of left-wing and right-wing activists, Schradie’s book is a worthwhile consideration of the ways that various activists use these tools. Yet where this altogether excellent work really stands out is in the ways in which it highlights the politics that are embedded in and reified by high-tech tools. Schradie is certainly not suggesting that activists abandon their devices—insofar as these are the dominant communication tools at present, activists have little choice but to use them—but this book puts forth a nuanced argument about the need for activists to think critically about whether they’re using digital tools, or whether the digital tools are using them.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • ben-Aaron, Diana. 1985. “Weizenbaum Examines Computers and Society.” The Tech (Apr 9).
    • Weizenbaum, Joseph, and Gunna Wendt. 2015. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society. Duluth, MN: Litwin Books.
  • Zachary Loeb — From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ After the Digital Turn


    Zachary Loeb

    Without even needing to look at the copyright page, an aware reader may be able to date the work of a technology critic simply by considering the technological systems, or forms of media, being critiqued. Unfortunately, in discovering the date of a given critique one may be tempted to conclude that the critique itself must surely be dated. Past critiques of technology may be read as outdated curios, can be considered as prescient warnings that have gone unheeded, or be blithely disregarded as the pessimistic braying of inveterate doomsayers. Yet, in the case of Lewis Mumford, even though his activity peaked by the mid-1970s, it would be a mistake to deduce from this that his insights are of no value to the world of today. Indeed, when it comes to the “digital turn,” it is a “turn” in the road which Mumford saw coming.

    It would be reductive to simply treat Mumford as a critic of technology. His body of work includes literary analysis, architectural reviews, treatises on city planning, iconoclastic works of history, impassioned calls to arms, and works of moral philosophy (Mumford 1982; Miller 1989; Blake 1990; Luccarelli 1995; Wojtowicz 1996). Leo Marx described Mumford as “a generalist with strong philosophic convictions,” one whose body of work represents the steady unfolding of “a single view of reality, a comprehensive historical, moral, and metaphysical—one might say cosmological—doctrine” (L. Marx 1990: 167). In the opinion of the literary scholar Charles Molesworth, Mumford is an “axiologist with a clear social purpose: he wants to make available to society a better and fuller set of harmoniously integrated values” (Molesworth 1990: 241), while Christopher Lehmann-Haupt caricatured Mumford as “perhaps our most distinguished flagellator,” and Lewis Coser denounced him as a “prophet of doom” who “hates almost all modern ideas and modern accomplishments without discrimination” (Mendelsohn 1994: 151-152). Perhaps Mumford is captured best by Rosalind Williams, who identified him alternately as an “accidental historian” (Williams 1994: 228) and as a “cultural critic” (Williams 1990: 44), or by Don Ihde, who referred to him as an “intellectual historian” (Ihde 1993: 96). As for Mumford’s own views, he saw himself in the mold of the prophet Jonah, “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (Mumford 1979: 528).

    Therefore, in the spirit of this Jonah, let us go see what is happening in Nineveh after the digital turn. Drawing upon Mumford’s oeuvre, particularly the two-volume The Myth of the Machine, this paper investigates similarities between Mumford’s concept of “the megamachine” and the post-digital-turn technological world. In drawing out these resonances, I pay particular attention to the ways in which computers featured in Mumford’s theorizing of the “megamachine” and informed his darkening perception. In addition, I expand upon Mumford’s concept of “the megatechnic bribe” to argue that, after the digital turn, what takes place is a move from “the megatechnic bribe” towards what I term “megatechnic blackmail.”

    In a piece provocatively titled “Prologue for Our Times,” which originally appeared in The New Yorker in 1975, Mumford drolly observed: “Even now, perhaps a majority of our countrymen still believe that science and technics can solve all human problems. They have no suspicion that our runaway science and technics themselves have come to constitute the main problem the human race has to overcome” (Mumford 1975: 374). The “bad news” is that more than forty years later a majority may still believe that.

    Towards “The Megamachine”

    The two-volume Myth of the Machine was not Mumford’s first attempt to put forth an overarching explanation of the state of the world mixing cultural criticism, historical analysis, and free-form philosophizing; he had previously attempted a similar feat with his Renewal of Life series.

    Mumford originally planned the work as a single volume, but soon came to realize that this project was too ambitious to fit within a single book jacket (Miller 1989, 299). The Renewal of Life ultimately consisted of four volumes: Technics and Civilization (1934), The Culture of Cities (1938), The Condition of Man (1944), and The Conduct of Life (1951)—of which Technics and Civilization remains the text that has received the greatest continued attention. The nearly twenty-year span over which these four books were written was one of immense change and upheaval in the world, and this certainly shaped the form and argument of the books. They fall evenly on opposite sides of two events that were to have a profound influence on Mumford’s worldview: the 1944 death of his son Geddes on the Italian front during World War II, and the dropping of atomic bombs on Hiroshima and Nagasaki in 1945.

    The four books fit oddly together and reflect Mumford’s steadily darkening view of the world—a pendulous swing from hopefulness to despair (Blake 1990, 286-287). With the Renewal of Life, Mumford sought to construct a picture of the sort of “whole” which could develop such marvelous potential, but which was so morally weak that it wound up using that strength for destructive purposes. Unwelcome though Mumford’s moralizing may have been, it was an attempt, albeit from a tragic perspective (Fox 1990), to explain why things were the way that they were, and what steps needed to be taken for positive change to occur. That the changes taking place were, in Mumford’s estimation, changes for the worse propelled him to develop concepts like “the megamachine” and the “megatechnic bribe” to explain the societal regression he was witnessing.

    By the time Mumford began work on The Renewal of Life he had already established himself as a prominent architectural critic and public intellectual. Yet he remained outside of any distinct tradition, school, or political ideology. Mumford was an iconoclastic thinker whose ethically couched regionalist radicalism, influenced by the likes of Ebenezer Howard, Thorstein Veblen, Peter Kropotkin and especially Patrick Geddes, placed him at odds with liberals and socialists alike in the early decades of the twentieth century (Blake 1990, 198-199). For Mumford the prevailing progressive and radical philosophies had been buried amongst the rubble of World War I; he felt that a fresh philosophy was needed, one that would find in history the seeds for social and cultural renewal, and Mumford thought himself well-equipped to develop such a philosophy (Miller 1989, 298-299). Mumford was hardly the first in his era to attempt such a synthesis (Lasch 1991): Oswald Spengler had already published a grim version of such a new philosophy (300). Indeed, there is something of a perhaps not-accidental parallel between Spengler’s title The Decline of the West and Mumford’s choice of The Renewal of Life as the title for his own series.

    In Mumford’s estimation, Spengler’s work was “more than a philosophy of history”; it was “a work of religious consolation” (Mumford 1938, 218). The two volumes of The Decline of the West are monuments to Prussian pessimism in which Spengler argues that cultures pass “from the organic to the inorganic, from spring to winter, from the living to the mechanical, from the subjectively conditioned to the objectively conditioned” (220). Spengler argued that this is the fate of all societies, and he believed that “the West” had entered into its winter. It is easy to read Spengler’s tracts as woebegone anti-technology dirges (Farrenkopf 2001, 110-112), or as a call for “Faustian man” (Western man) to assert dominance over the machine and wield it lest it be wielded against him (Herf 1984, 49-69); but Mumford observed that Spengler had “predicted, better than more hopeful philosophers, the disastrous downward course that modern civilization is now following” (Mumford 1938, 235). Spengler had been an early booster of the Nazi regime, if a later critic of it, and though Mumford criticized Spengler for the politics he helped unleash, Mumford still saw him as one with “much to teach the historian and the sociologist” (Mumford 1938, 227). Mumford was particularly drawn to, and influenced by, Spengler’s method of writing moral philosophy in the guise of history (Miller 1989, 301). And it may well be that Spengler’s woebegone example prompted Mumford to distance himself from being a more “hopeful” philosopher in his later writings. Nevertheless, where Spengler had gazed longingly towards the coming fall, Mumford, even in the grip of the megamachine, still believed that the fall could be avoided.

    Mumford concludes the final volume of The Renewal of Life, The Conduct of Life, with measured optimism, noting: “The way we must follow is untried and heavy with difficulty; it will test to the utmost our faith and our powers. But it is the way toward life, and those who follow it will prevail” (Mumford 1951, 292). Alas, as the following sections will demonstrate, Mumford grew steadily less confident in the prospects of “the way toward life,” and the rise of the computer only served to make the path more “heavy with difficulty.”

    The Megamachine

    The volumes of The Renewal of Life hardly had enough time to begin gathering dust before Mumford was writing another work that sought to explain why the prophesized renewal had not come. In the two volumes of The Myth of the Machine Mumford revisits the themes from The Renewal of Life while advancing an even harsher critique and developing his concept of the “megamachine.” The idea of the megamachine has been taken up for its explanatory potential by many others beyond Mumford in a range of fields: it was drawn upon by some of his contemporary critics of technology (Fromm 1968; Illich 1973; Ellul 1980), has been commented on by historians and philosophers of technology (Hughes 2004; Jacoby 2005; Mitcham 1994; Segal 1994), has been explored in post-colonial thinking (Alvares 1988), and has sparked cantankerous disagreements amongst those seeking to deploy the term to advance political arguments (Bookchin 1995; Watson 1997). It is a term that shares certain similarities with other concepts that aim to capture the essence of totalitarian technological control, such as Jacques Ellul’s “technique” (Ellul 1967) and Neil Postman’s “technopoly” (Postman 1993). It is an idea that, as I will demonstrate, is still useful for describing, critiquing, and understanding contemporary society.

    Mumford first gestured in the direction of the megamachine in his 1964 essay “Authoritarian and Democratic Technics” (Mumford 1964). There Mumford argued that small scale technologies which require the active engagement of the human, that promote autonomy, and that are not environmentally destructive are inherently “democratic” (2-3); while large scale systems that reduce humans to mere cogs, that rely on centralized control, and that are destructive of planet and people are essentially “authoritarian” (3-4). For Mumford, the rise of “authoritarian technics” was a relatively recent occurrence; however, by “recent” he had in mind “the fourth millennium B.C.” (3). Though Mumford considered “nuclear bombs, space rockets, and computers” all to be examples of contemporary “authoritarian technics” (5), he considered the first examples of such systems to have appeared under the aegis of absolute rulers who exploited their power and scientific knowledge for immense construction feats such as the building of the pyramids. Those endeavors had created “complex human machines composed of specialized, standardized, replaceable, interdependent parts—the work army, the military army, the bureaucracy” (3). In drawing out these two tendencies, Mumford was clearly arguing in favor of “democratic technics,” but he moved away from these terms once he coined the neologism “megamachine.”

    Like the Renewal of Life before it, The Myth of the Machine was originally envisioned as a single book (Mumford 1970, xi). The first volume of the two represents something of a rewriting of Technics and Civilization, but gone from Technics and Human Development is the optimism that had animated the earlier work. By 1959 Mumford had dismissed Technics and Civilization as “something of a museum piece” wherein he had “assumed, quite mistakenly, that there was evidence for a weakening of faith in the religion of the machine” (Mumford 1934, 534). As Mumford wrote The Myth of the Machine he found himself looking at decades of so-called technological progress and seeking an explanation as to why this progress seemed to primarily consist of mountains of corpses and rubble.

    With the rise of kingship, in Mumford’s estimation, so too came the ability to assemble and command people on a scale that had been previously unknown (Mumford 1967, 188). This “machine” functioned by fully integrating all of its components to complete a particular goal, and “when all the components, political and economic, military, bureaucratic and royal, must be included,” what emerges is “the megamachine” and along with it “megatechnics” (188-189). It was a structure in which, originally, the parts were not made of steel, glass, stone or copper but of flesh and blood—though each human component was assigned and slotted into a position as though it were a cog. While the fortunes of the megamachine ebbed and flowed for a period, Mumford saw the megamachine as becoming resurgent in the 1500s as faith in the “sun god” came to be replaced by the “divine king” exploiting new technical and scientific knowledge (Mumford 1970: 28-50). Indeed, in assessing the thought of Hobbes, Mumford goes so far as to state “the ultimate product of Leviathan was the megamachine, on a new and enlarged model, one that would completely neutralize or eliminate its once human parts” (100).

    Unwilling to mince words, Mumford had started The Myth of the Machine by warning that with the “new ‘megatechnics’ the dominant minority will create a uniform, all-enveloping, super-planetary structure, designed for automatic operation” in which “man will become a passive, purposeless, machine-conditioned animal” (Mumford 1967, 3). Writing at the close of the 1960s, Mumford observed that the impossible fantasies of the controllers of the original megamachines were now actual possibilities (Mumford 1970, 238). The rise of the modern megamachine was the result of a series of historic occurrences: the French Revolution, which replaced the power of the absolute monarch with the power of the nation state; World War I, wherein scientists and scholars were brought into service of the state whilst moderate social welfare programs were introduced to placate the masses (245); and finally the emergence of tools of absolute control and destructive power, such as the atom bomb (253). Figures like Stalin and Hitler were not exceptions to the rule of the megamachine but only instances that laid bare “the most sinister defects of the ancient megamachine”: its violent, hateful, and repressive tendencies (247).

    Even though the power of the megamachine may make it seem that resistance is futile, Mumford was no defeatist. Indeed, The Pentagon of Power ends with a gesture towards renewal that is reminiscent of his argument in The Conduct of Life—albeit with a recognition that the state of the world had grown steadily more perilous. A core element of Mumford’s argument is that the megamachine’s power was reliant on the belief invested in it (the “myth”), but if belief in the megamachine could be challenged, so too could the megamachine itself (Miller 1989, 156). The Pentagon of Power met with a decidedly mixed reaction: it was chosen as a main selection of the Book-of-the-Month Club, and The New Yorker serialized much of the argument about the megamachine (157). Yet many of the reviewers of the book denounced Mumford for his pessimism; it was in a review of the book in the New York Times that Mumford was dubbed “our most distinguished flagellator” (Mendelsohn 1994, 151-154). And though Mumford chafed at being dubbed a “prophet of doom” (Segal 1994, 149), it is worth recalling that he liked to see himself in the mode of that “prophet of doom” Jonah (Mumford 1979).

    After all, even though Mumford held out hope that the megamachine could be challenged—that the Renewal of Life could still beat back The Myth of the Machine—he glumly acknowledged that the belief that the megamachine was “absolutely irresistible” and “ultimately beneficent…still enthralls both the controllers and the mass victims of the megamachine today” (Mumford 1967, 224). Mumford described this myth as operating like a “magical spell,” but as the discussion of the megatechnic bribe will demonstrate, it is not so much that the audience is transfixed as that they are bought off. Nevertheless, before turning to the topic of the bribe and blackmail, it is necessary to consider how the computer fit into Mumford’s theorizing of the megamachine.

    The Computer and the Megamachine

    Five years after the publication of The Pentagon of Power, Mumford was still claiming that “the Myth of the Machine” was “the ultimate religion of our seemingly rational age” (Mumford 1975, 375). While it is certainly fair to note that Mumford’s “today” is not our today, it would be foolhardy to dismiss the idea of the megamachine as anachronistic moralizing. And to credit the concept with its full prescience and continued utility, it is worth reading the text closely to consider the ways in which Mumford was writing about the computer—before the digital turn.

    Writing to his friend, the British garden city advocate Frederic J. Osborn, Mumford noted: “As to the megamachine, the threat that it now offers turns out to be even more frightening, thanks to the computer, than even I in my most pessimistic moments had ever suspected. Once fully installed our whole lives would be in the hands of those who control the system…no decision from birth to death would be left to the individual” (M. Hughes 1971, 443). It may be that Mumford was merely engaging in a bit of hyperbolic flourish in describing his view of the computer as trumping his “most pessimistic moments,” but Mumford was no stranger to (or enemy of) pessimistic moments. Mumford was always searching for fresh evidence of “renewal,” and his deepening pessimism points to the types of evidence he was actually finding. In constructing a narrative that traced the origins of the megamachine across history, Mumford had been hoping to show “that human nature is biased toward autonomy and against submission to technology” (Miller 1990, 157), but in the computer Mumford saw evidence pointing in the opposite direction.

    In assessing the computer, Mumford drew a contrast between the basic capabilities of the computers of his day and the direction in which he feared that “computerdom” was moving (Mumford 1970, plate 6). Computers, to him, were not simply about controlling “the mechanical process” but also “the human being who once directed it” (189). Moving away from historical antecedents like Charles Babbage, Mumford emphasized Norbert Wiener’s attempt to highlight human autonomy, and he praised Wiener’s concern about the tendency of some technicians to view the world only in terms of the sorts of data that computers could process (189). Mumford saw some of the enthusiasm for the computer’s capability as rather “over-rated,” and he cited instances—such as the computer failure during the Apollo 11 moon landing—as evidence that computers were not quite as all-powerful as some claimed (190). In the midst of a growing ideological adoration for computers, Mumford argued that their “life-efficiency and adaptability…must be questioned” (190). Mumford’s critique of computers can be read as an attempt on his part to undermine the faith in computers while that belief was still in its nascent cult state—before it could become a genuine world religion.

    Mumford does not assume a wholly dismissive position towards the computer. Instead he takes a stance toward it that is similar to his position towards most forms of technology: its productive use “depends upon the ability of its human employers quite literally to keep their own heads, not merely to scrutinize the programming but to reserve the right for ultimate decision” (190). To Mumford, the computer “is a big brain in its most elementary state: a gigantic octopus, fed with symbols instead of crabs,” but just because it could mimic some functions of the human mind did not mean that the human mind should be discarded (Mumford 1967, 29). The human brain was for Mumford infinitely more complex than a computer could be, and even where computers might catch up in quantitative terms, Mumford argued that the human brain would always remain superior in qualitative terms (39). Mumford had few doubts about the capability of computers to perform the functions for which they had been programmed, but he saw computers as fundamentally “closed” systems whereas the human mind was an “open” one; computers could follow their programs but he did not think they could invent new ones from scratch (Mumford 1970, 191). For Mumford the rise in the power of computers was linked largely to the shift away from “old-fashioned” machines such as Babbage’s Calculating Engine—and towards the new digital and electric machines which were becoming smaller and more commonplace (188). And though Mumford clearly respected the ingenuity of scientists like Wiener, he amusingly suggested that “the exorbitant hopes for a computer dominated society” were really the result of “the ‘pecuniary-pleasure’ center” (191). While Mumford’s measured consideration of the computer’s basic functioning is important, what is of greater significance is his thinking regarding the computer’s place in the megamachine.

    Whereas much of Technics and Human Development focuses upon the development of the first megamachine, in The Pentagon of Power Mumford turns his focus to the fresh incarnation of the megamachine. This “new megamachine” was distinguished by the way in which it steadily did away with the need for the human altogether—now that there were plenty of actual cogs (and computers), human components were superfluous (258). To Mumford, scientists and scholars had become a “new priesthood” who had abdicated their freedom and responsibility as they came to serve the “megamachine” (268). But if they were the “priesthood,” then whom did they serve? As Mumford explained, in the command position of this new megamachine was to be found a new “ultimate ‘decision-maker’ and Divine King,” and this figure had emerged in “a transcendent, electronic form”: it was “the Central Computer” (273).

    In 1970, before the rise of the personal computer or the smartphone, Mumford’s warnings about computers may have seemed somewhat excessive. Yet, in imagining the future of “a computer dominated society” Mumford was forecasting that the growth of the computer’s power meant the consolidation of control by those already in power. Whereas the rulers of yore had dreamt of being all-seeing, with the rise of the computer such power ceased being merely a fantasy as “the computer turns out to be the Eye of the reinstated Sun God” capable of exacting “absolute conformity to his demands, because no secret can be hidden from him, and no disobedience can go unpunished” (274). And this “eye” saw a great deal: “In the end, no action, no conversation, and possibly in time no dream or thought would escape the wakeful and relentless eye of this deity: every manifestation of life would be processed into the computer and brought under its all-pervading system of control. This would mean, not just the invasion of privacy, but the total destruction of autonomy: indeed the dissolution of the human soul” (274-275). The mention of “the human soul” may be evocative of a standard bit of Mumfordian moralizing, but the rest of this quote has more to say about companies like Google and Facebook, as well as about the mass surveillance of the NSA, than many things written since. Indeed, there is something almost quaint about Mumford writing of “no action” decades before social media made it so that an action not documented on social media is of questionable veracity. And the comment regarding “no conversation” seems uncomfortably apt in an age where people are cautioned not to disclose private details in front of their smart TVs and in which the Internet of Things populates people’s homes with devices that are always listening.

    Mumford may have written these words in the age of large mainframe computers, but his comments on “the total destruction of autonomy” and the push towards “computer dominated society” demonstrate that he did not believe that the power of such machines could be safely locked away. Indeed, that Mumford saw the computer as an example of an “authoritarian technic” makes it highly questionable that he would have been swayed by the idea that personal computers could grant individuals more autonomy. Rather, as I discuss below, it is far more likely that he would have seen the personal computer as precisely the sort of democratic-seeming gadget used to “bribe” people into accepting the larger “authoritarian” system. For it is precisely through the placing of personal computers in people’s homes, and eventually on their persons, that the megamachine is able to advance towards its goal of total control.

    The earlier incarnations of the megamachine had dreamt of the sort of power that became actually available in the aftermath of World War II thanks to “nuclear energy, electric communication, and the computer” (274). And finally the megamachine’s true goal became clear: “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system” (275). In short, the ultimate purpose of the megamachine was to further the power and enhance the control of the megamachine itself. It is easy to see in this a warning about the dangers of “big data” many decades before that term had entered into common use. Aware of how odd these predictions may have sounded to his contemporaries, Mumford recognized that only a few decades earlier such ideas could have been dismissed as just so much “satire,” but he emphasized that such alarming potentialities were now either already in existence or nearly within reach (275).

    In the twenty-first century, after the digital turn, it is easy to find examples of entities that fit the bill of the megamachine. It may, in fact, be easier to do this today than it was during Mumford’s lifetime. For one no longer needs to engage in speculative thinking to find examples of technologies that ensure that “no action” goes unnoticed. The handful of massive tech conglomerates that dominate the digital world today—companies like Google, Facebook, and Amazon—seem almost scarily apt manifestations of the megamachine. Under these platforms “every manifestation of life” gets “processed into the computer and brought under its all-pervading system of control,” whether it be what a person searches for, what they consider buying, how they interact with friends, how they express their likes, what they actually purchase, and so forth. And as these companies compete for data they work to ensure that nothing is missed by their “relentless eye[s].” Furthermore, though these companies may be technology firms, they are like the classic megamachines insofar as they bring together the “political and economic, military, bureaucratic and royal.” Granted, today’s “royal” are not those who have inherited their thrones but those who owe their thrones to the tech empires at the heads of which they sit. The status of these platforms’ users, reduced as they are to cogs supplying an endless stream of data, further demonstrates the totalizing effects of the megamachine as it coordinates all actions to serve its purposes. And yet, Google, Facebook, and Amazon are not the megamachine but rather examples of megatechnics; the megamachine is the broader system of which all of those companies are merely parts.

    Though the chilling portrait created by Mumford seems to suggest a definite direction, and a grim final destination, Mumford tried to highlight that such a future “though possible, is not determined, still less an ideal condition of human development” (276). Nevertheless, it is clear that Mumford saw the culmination of “the megamachine” in the rise of the computer and the growth of “computer dominated society.” Thus, “the megamachine” is a forecast of the world after “the digital turn.” Yet, the continuing strength of Mumford’s concept is based not only on the prescience of the idea itself, but in the way in which Mumford sought to explain how it is that the megamachine secures obedience to its strictures. It is to this matter that our attention, at last, turns.

    From the Megatechnic Bribe to Megatechnic Blackmail

    To explain how the megamachine had maintained its power, Mumford provided two answers, both of which avoid treating the megamachine as a merely “autonomous” force (Winner 1989, 108-109). The first explanation that Mumford gives is an explanation of the titular idea itself: “the ultimate religion of our seemingly rational age,” which he dubbed “the myth of the machine” (Mumford 1975, 375). The key component of this “myth” is “the notion that this machine was, by its very nature, absolutely irresistible—and yet, provided that one did not oppose it, ultimately beneficial” (Mumford 1967, 224)—once assembled and set into action the megamachine appears inevitable, and those living in megatechnic societies are conditioned from birth to think of the megamachine in such terms (Mumford 1970, 331).

    Yet the second part of the myth is equally, if not more, important: it is not merely that the megamachine appears “absolutely irresistible” but that many are convinced that it is “ultimately beneficial.” This feeds into what Mumford described as “the megatechnic bribe,” a concept which he first sketched briefly in “Authoritarian and Democratic Technics” (Mumford 1964, 6) but which he fully developed in The Pentagon of Power (Mumford 1970, 330-334). The “bribe” functions by offering those who go along with it a share in the “perquisites, privileges, seductions, and pleasures of the affluent society”—so long, that is, as they do not question or ask for anything different from that which is offered (330). And this, Mumford recognizes, is a truly tempting offer, as it allows its recipients to believe they are personally partaking in “progress” (331). After all, a “bribe” only really works if what is offered is actually desirable. But, Mumford warns, once a people opt for the megamachine, once they become acclimated to the air-conditioned pleasure palace of the megatechnic bribe, “no other choices will remain” (332).

    By means of this “bribe,” the megamachine is able to effect an elaborate bait and switch: one through which people are convinced that an authoritarian technic is actually a democratic one. For the bribe accepts “the basic principle of democracy, that every member of society should have a share in its goods” (Mumford 1964, 6). Mumford did not deny the impressive things with which people were being bribed, but to see them as only beneficial required, in his estimation, a one-sided assessment which ignored “long-term human purposes and a meaningful pattern of life” (Mumford 1970, 333). It entailed confusing the interests of the megamachine with the interests of actual people. Thus, the problem was not the gadgets as such, but the system in which these things were created and produced, and the purposes for which they were disseminated: the true purpose of these things was to incorporate people into the megamachine (334). The megamachine created a strange and hostile new world, but offered its denizens bribes to convince them that life in this world was actually a treat. Ruminating on the persuasive power of the bribe, Mumford wondered if democracy could survive after “our authoritarian technics consolidates its powers, with the aid of its new forms of mass control, its panoply of tranquilizers and sedatives and aphrodisiacs” (Mumford 1964, 7). And in typically Jonah-like fashion, Mumford balked at the very question, noting that in such a situation “life itself will not survive, except what is funneled through the mechanical collective” (7).

    If one chooses to take the framework of the “megatechnic bribe” seriously then it is easy to see it at work in the 21st century. It is the bribe that stands astride the dais at every gaudy tech launch, it is the bribe which beams down from billboards touting the slightly sleeker design of the new smartphone, it is the bribe which promises connection or health or beauty or information or love or even technological protection from the forces that technology has unleashed. The bribe is the offer of the enticing positives that distracts from the legion of downsides. And in all of these cases that which is offered is that which ultimately enhances the power of the megamachine. As Mumford feared, the values that wind up being transmitted across these “bribes,” though they may attempt a patina of concern for moral or democratic values, are mainly concerned with reifying (and deifying) the values of the system offering up these forms of bribery.

    Yet this reading should not be taken as a curmudgeonly rejection of technology as such. In keeping with Mumford’s stance, one can recognize that the things put on offer after the digital turn provide people with an impressive array of devices and platforms, but such niceties also seem like the pleasant distraction that masks and normalizes rampant surveillance, environmental destruction, labor exploitation, and the continuing concentration of wealth in a few hands. It is not that there is a total lack of awareness about the downsides of the things that are offered as “bribes,” but that the offer is too good to refuse. And especially if one has come to believe that the technological status quo is “absolutely irresistible,” then it makes sense why one would want to conclude that this situation is “ultimately beneficial.” As Langdon Winner put it several decades ago, “the prevailing consensus seems to be that people love a life of high consumption, tremble at the thought that it might end, and are displeased about having to clean up the messes that technologies sometimes bring” (Winner 1986, 51); such a sentiment is the essence of the bribe.

    Nevertheless, it seems that more thought needs to be given to the bribe after the digital turn, the point after which the bribe has already become successful. The background of the Cold War may have provided a cultural space for Mumford’s skepticism, but, as Wendy Hui Kyong Chun has argued, with the technological advances around the Internet in the last decade of the twentieth century, “technology became once again the solution to political problems” (Chun 2006, 25). Therefore, in the twenty-first century bribery no longer needs to be deployed to secure loyalty to a system of control towards which there is substantial skepticism. Or, to put it slightly differently, at this point there are not many people who still need to be convinced that they should use a computer. We no longer need to hypothesize about “computer dominated society,” for we already live there. After all, the technological value systems about which Mumford was concerned have now gained significant footholds not only in the corridors of power, but in every pocket that contains a smartphone. It would be easy to walk through the library brimming with e-books touting the wonders of all that is digital and persuasively disseminating the ideology of the bribe, but such “sugar-coated soma pills”—to borrow a turn of phrase from Howard Segal (1994, 188)—serve more as examples of the continued existence of the bribe than as explanations of how it has changed.

    At the end of her critical history of social media, José Van Dijck (Van Dijck 2013, 174) offers what can be read as an important example of how the bribe has changed, when she notes that “opting out of connective media is hardly an option. The norm is stronger than the law.” On a similar note, Laura Portwood-Stacer in her study of Facebook abstention portrays the very act of not being on that social media platform as “a privilege in itself” —an option that is not available to all (Portwood-Stacer 2012, 14). In interviews with young people, Sherry Turkle has found many “describing how smartphones and social media have infused friendship with the Fear of Missing Out” (Turkle 2015, 145). Though smartphones and social media platforms certainly make up the megamachine’s ecosystem of bribes, what Van Dijck, Portwood-Stacer, and Turkle point to is an important shift in the functioning of the bribe. Namely, that today we have moved from the megatechnic bribe, towards what can be called “megatechnic blackmail.”

    Whereas the megatechnic bribe was concerned with assimilating people into the “new megamachine,” megatechnic blackmail is what occurs once the bribe has already been largely successful. This is not to claim that the bribe does not still function—for it surely does through the mountain of new devices and platforms that are constantly being rolled out—but, rather, that it does not work by itself. The bribe is what is at work when something new is being introduced, it is what convinces people that the benefits outweigh any negative aspects, and it matches the sense of “irresistibility” with a sense of “beneficence.” Blackmail, in this sense, works differently—it is what is at work once people become all too aware of the negative side of smartphones, social media, and the like. Megatechnic blackmail is what occurs once, as Van Dijck put it, “the norm” becomes “stronger than the law” as here it is not the promise of something good that draws someone in but the fear of something bad that keeps people from walking away.

    This puts the real “fear” in the “fear of missing out,” which no longer needs to promise “use this platform because it’s great” but can instead threaten “you know there are problems with this platform, but use it or you will not know what is going on in the world around you.” The shift from bribe to blackmail can further be seen in the consolidation of control in the hands of fewer companies behind the bribes—the inability of an upstart social network (a fresh bribe) to challenge the dominant social network is largely attributable to the latter having moved into a blackmail position. It is no longer the case that a person, in a Facebook-saturated society, has a lot to gain by joining the site, but that (if they have already accepted its bribe) they have a lot to lose by leaving it. The bribe secures the adoration of the early adopters, and it convinces the next wave of users to jump on board, but blackmail is what ensures their fealty once the shiny veneer of the initial bribe begins to wear thin.

    Mumford had noted that in a society wherein the bribe was functioning smoothly, “the two unforgivable sins, or rather punishable vices, would be continence and selectivity” (Mumford 1970, 332) and blackmail is what keeps those who would practice “continence and selectivity” in check. As Portwood-Stacer noted, abstention itself may come to be a marker of performative privilege—to opt out becomes a “vice” available only to those who can afford to engage in it. To not have a smartphone, to not have a Facebook account, to not buy things on Amazon, or use Google, becomes either a signifier of one’s privilege or marks one as an outsider.

    Furthermore, choosing to renounce a particular platform (or to use it less) rarely entails swearing off the ecosystem of megatechnics entirely. As far as the megamachine is concerned, insofar as options are available and one can exercise a degree of “selectivity,” what matters is that one is still selecting from that which is offered by the megamachine. The choice between competing systems of particular megatechnics is still a choice that takes place within the framework of the megamachine. Thus, Douglas Rushkoff’s call to “program or be programmed” (Rushkoff 2010) appears less as a rallying cry of resistance than as a quiet acquiescence: one can program, or one can be programmed, but what is unacceptable is to try to pursue a life outside of programs. Here the turn that seeks to rediscover the Internet’s once emancipatory promise in wikis, crowd-funding, digital currency, and the like speaks to a subtle hope that the problems of the digital day can be defeated by doubling down on the digital. From this technologically-optimistic view the problem with companies like Google and Facebook is that they have warped the anarchic promise, and violated the independence, of cyberspace (Barlow 1996; Turner 2006); or that capitalism has undermined the radical potential of these technologies (Fuchs 2014; Srnicek and Williams 2015). Yet, from Mumford’s perspective such hopes and optimism are unwarranted. Indeed, they are the sort of democratic fantasies that serve to cover up the fact that the computer, at least for Mumford, was ultimately still an authoritarian technology. For the megamachine it does not matter if the smartphone with a Twitter app is used by the President or by an activist: either use is wholly acceptable insofar as both serve to deepen immersion in the “computer dominated society” of the megamachine. And thus, as to the hope that megatechnics can be used to destroy the megamachine, it is worth recalling Mumford’s quip: “Let no one imagine that there is a mechanical cure for this mechanical disease” (Mumford 1954, 50).

    In this situation the only thing worse than falling behind or missing out is to actually challenge the system itself; to practice, or to argue that others should practice, “continence and selectivity” leads to one being denounced as a “technophobe” or “Luddite.” That kind of derision fits well with Mumford’s observation that the attempt to live “detached from the megatechnic complex,” to be “cockily independent of it, or recalcitrant to its demands, is regarded as nothing less than a form of sabotage” (Mumford 1970, 330). Minor criticisms can be permitted if they are of the type that can be assimilated and used to improve the overall functioning of the megamachine, but the unforgivable heresy is to challenge the megamachine itself. It is acceptable to claim that a given company should be more mindful of a given social concern, but it is unacceptable to claim that the world would actually be a better place if this company were no more. One sees further signs of the threat of this sort of blackmail at work in the opening pages of critical books about technology aimed at the popular market, wherein the authors dutifully declare that though they have some criticisms they are not anti-technology. Such moves are not the signs of people merrily cooperating with the bribe, but of people recognizing that they can contribute to a kinder, gentler bribe (to a greater or lesser extent) or risk being banished to the margins as fuddy-duddies, kooks, environmentalist weirdos, or as people who really want everyone to go back to living in caves. The “myth of the machine” thrives on the belief that there is no alternative. One is permitted (in some circumstances) to say “don’t use Facebook,” but one cannot say “don’t use the Internet.” Blackmail is what helps to bolster the structure that unfailingly frames the megamachine as “ultimately beneficial.”

    The megatechnic bribe dazzles people by muddling the distinction between, to use a comparison Mumford was fond of, “the goods life” and “the good life.” But megatechnic blackmail warns those who grow skeptical of this patina of “the good life” that they can either settle for “the goods life” or look forward to an invisible life on the margins. Those who can’t be bribed are blackmailed. Thus it is no longer just that the myth of the machine rests on the idea that the megamachine is “absolutely irresistible” and “ultimately beneficial”; the myth now includes the idea that to push back is “unforgivably detrimental.”

    Conclusion

    Of the various biblical characters from whom one can draw inspiration, Jonah is something of an odd choice for a public intellectual. After all, Jonah first flees from his prophetic task, sleeps in the midst of a perilous storm, and upon delivering the prophecy retreats to a hillside to glumly wait to see if the prophesied destruction will come. There is a certain degree to which Jonah almost seems disappointed that the people of Nineveh mend their ways and are forgiven by God. Yet some of Jonah’s frustrated disappointment flows from his sense that the whole ordeal was pointless—he had always known that God would forgive the people of Nineveh and not destroy the city. Given that, why did Jonah have to leave the comfort of his home in the first place? (JPS 1999, 1333-1337). Mumford always hoped to be proven wrong. As he put it in the very talk in which he introduced himself as Jonah, “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy” (Mumford 1979, 528). But those words do not appear on Mumford’s tombstone.

    Assessing whether Mumford was “an absolute fool” and whether any “of the disastrous things that he reluctantly predicted ever came to pass” is a tricky mire to traverse. For the way that one responds probably has as much to do with whether or not one shares Mumford’s outlook as with anything particular he wrote. During his lifetime Mumford had no shortage of critics who viewed him as a stodgy pessimist. But what is one to expect if one is trying to follow the example of Jonah? If you see yourself as “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (528), then you can hardly be surprised when many choose to dismiss you as a way of dismissing the bad news you bring.

    Yet it has been the contention of this paper that Mumford should not be ignored—and that his thought provides a good tool to think with after the digital turn. In his introduction to the 2010 edition of Mumford’s Technics and Civilization, Langdon Winner notes that it “openly challenged scholarly conventions of the early twentieth century and set the stage for decades of lively debate about the prospects for our technology-centered ways of living” (Mumford 2010, ix). Even if the concepts from The Myth of the Machine have not “set the stage” for debate in the twenty-first century, the ideas that Mumford develops there can pose useful challenges for present discussions around “our technology-centered ways of living.” True, “the megamachine” is somewhat clunky as a neologism, but as a term that encompasses the technical, political, economic, and social arrangements of a powerful system it provides a better shorthand for capturing the essence of Google or the NSA than many other terms. Mumford clearly saw the rise of the computer as the invention through which the megamachine would be able to fully secure its throne. At the same time, the idea of the “megatechnic bribe” is a thoroughly discomforting explanation for how people can grumble about Apple’s labor policies or Facebook’s uses of user data while eagerly lining up to upgrade to the latest model of iPhone or clicking “like” on a friend’s vacation photos. But in the present day the bribe has matured beyond a purely pleasant offer into a sort of threat that compels consent. Indeed, the idea of the bribe may be among Mumford’s grandest moves in the direction of telling people what they “don’t want to hear.” It is discomforting to think of your smartphone as something being used to “bribe” you, and that it is unsettling may be a sign of how deeply the claim resonates.

    Lewis Mumford never performed a Google search, never made a Facebook account, never Tweeted or owned a smartphone or a tablet, and his home was not a repository for the doodads of the Internet of Things. But it is doubtful that he would have been overly surprised by any of them. Though he may have appreciated them for their technical capabilities he would have likely scoffed at the utopian hopes that are hung upon them. In 1975 Mumford wrote: “Behold the ultimate religion of our seemingly rational age—the Myth of the Machine! Bigger and bigger, more and more, farther and farther, faster and faster became ends in themselves, as expressions of godlike power; and empires, nations, trusts, corporations, institutions, and power-hungry individuals were all directed to the same blank destination” (Mumford 1975, 375).

    Is this assessment really so outdated today? If so, perhaps the stumbling block is merely the term “machine,” which had more purchase in Mumford’s age than it does in our own. Today, that first line would need to be rewritten to read “the Myth of the Digital”—but beyond that, little else would need to be changed.

    _____

    Zachary Loeb is a graduate student in the History and Sociology of Science department at the University of Pennsylvania. His research focuses on technological disasters, computer history, and the history of critiques of technology (particularly the work of Lewis Mumford). He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Alvares, Claude. 1988. “Science, Colonialism, and Violence: A Luddite View.” In Science, Hegemony and Violence: A Requiem for Modernity, edited by Ashis Nandy. Delhi: Oxford University Press.
    • Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace” (Feb 8).
    • Blake, Casey Nelson. 1990. Beloved Community: The Cultural Criticism of Randolph Bourne, Van Wyck Brooks, Waldo Frank, and Lewis Mumford. Chapel Hill: The University of North Carolina Press.
    • Bookchin, Murray. 1995. Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm. Oakland: AK Press.
    • Cowley, Malcolm and Bernard Smith, eds. 1938. Books That Changed Our Minds. New York: The Kelmscott Editions.
    • Ellul, Jacques. 1967. The Technological Society. New York: Vintage Books.
    • Ellul, Jacques. 1980. The Technological System. New York: Continuum.
    • Ezrahi, Yaron, Everett Mendelsohn, and Howard P. Segal, eds. 1994. Technology, Pessimism, and Postmodernism. Amherst: University of Massachusetts Press.
    • Farrenkopf, John. 2001. Prophet of Decline: Spengler on World History and Politics. Baton Rouge: LSU Press.
    • Fox, Richard Wightman. 1990. “Tragedy, Responsibility, and the American Intellectual, 1925-1950.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Fromm, Erich. 1968. The Revolution of Hope: Toward a Humanized Technology. New York: Harper & Row, Publishers.
    • Fuchs, Christian. 2014. Social Media: A Critical Introduction. Los Angeles: Sage.
    • Herf, Jeffrey. 1984. Reactionary Modernism: Technology, Culture, and Politics in Weimar and the Third Reich. Cambridge: Cambridge University Press.
    • Hughes, Michael (ed.) 1971. The Letters of Lewis Mumford and Frederic J. Osborn: A Transatlantic Dialogue, 1938-1970. New York: Praeger Publishers.
    • Hughes, Thomas P. and Agatha C. Hughes. 1990. Lewis Mumford: Public Intellectual. New York: Oxford University Press.
    • Hughes, Thomas P. 2004. Human-Built World: How to Think About Technology and Culture. Chicago: University of Chicago Press.
    • Hui Kyong Chun, Wendy. 2006. Control and Freedom. Cambridge: The MIT Press.
    • Ihde, Don. 1993. Philosophy of Technology: an Introduction. New York: Paragon House.
    • Jacoby, Russell. 2005. Picture Imperfect: Utopian Thought for an Anti-Utopian Age. New York: Columbia University Press.
    • JPS Hebrew-English Tanakh. 1999. Philadelphia: The Jewish Publication Society.
    • Lasch, Christopher. 1991. The True and Only Heaven: Progress and Its Critics. New York: W. W. Norton and Company.
    • Luccarelli, Mark. 1996. Lewis Mumford and the Ecological Region: The Politics of Planning. New York: The Guilford Press.
    • Marx, Leo. 1988. The Pilot and the Passenger: Essays on Literature, Technology, and Culture in the United States. New York: Oxford University Press.
    • Marx, Leo. 1990. “Lewis Mumford: Prophet of Organicism.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Marx, Leo. 1994. “The Idea of ‘Technology’ and Postmodern Pessimism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Mendelsohn, Everett. 1994. “The Politics of Pessimism: Science and Technology, Circa 1968.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Miller, Donald L. 1989. Lewis Mumford: A Life. New York: Weidenfeld and Nicolson.
    • Mitcham, Carl. 1994. Thinking Through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press.
    • Molesworth, Charles. 1990. “Inner and Outer: The Axiology of Lewis Mumford.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Mumford, Lewis. 1926. “Radicalism Can’t Die.” The Jewish Daily Forward (English section, Jun 20).
    • Mumford, Lewis. 1934. Technics and Civilization. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1944. The Condition of Man. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1951. The Conduct of Life. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1954. In the Name of Sanity. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1959. “An Appraisal of Lewis Mumford’s Technics and Civilization (1934).” Daedalus 88:3 (Summer). 527-536.
    • Mumford, Lewis. 1962. The Story of Utopias. New York: Compass Books, Viking Press.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Mumford, Lewis. 1967. Technics and Human Development. Vol. 1 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1970. The Pentagon of Power. Vol. 2 of The Myth of the Machine. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1975. Findings and Keepings: Analects for an Autobiography. New York: Harcourt, Brace and Jovanovich.
    • Mumford, Lewis. 1979. My Work and Days: A Personal Chronicle. New York: Harcourt, Brace, Jovanovich.
    • Mumford, Lewis. 1982. Sketches from Life: The Autobiography of Lewis Mumford. New York: The Dial Press.
    • Mumford, Lewis. 2010. Technics and Civilization. Chicago: The University of Chicago Press.
    • Portwood-Stacer, Laura. 2012. “Media Refusal and Conspicuous Non-consumption: The Performative and Political Dimensions of Facebook Abstention.” New Media and Society (Dec 5).
    • Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.
    • Rushkoff, Douglas. 2010. Program or Be Programmed. Berkeley: Soft Skull Books.
    • Segal, Howard P. 1994a. “The Cultural Contradictions of High Tech: or the Many Ironies of Contemporary Technological Optimism.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Segal, Howard P. 1994b. Future Imperfect: The Mixed Blessings of Technology in America. Amherst: University of Massachusetts Press.
    • Spengler, Oswald. 1932a. Form and Actuality. Vol. 1 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 1932b. Perspectives of World-History. Vol. 2 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 2002. Man and Technics: A Contribution to a Philosophy of Life. Honolulu: University Press of the Pacific.
    • Srnicek, Nick and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. New York: Verso Books.
    • Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press.
    • Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, The Whole Earth Network and the Rise of Digital Utopianism. Chicago: The University of Chicago Press.
    • Van Dijck, José. 2013. The Culture of Connectivity. Oxford: Oxford University Press.
    • Watson, David. 1997. Against the Megamachine: Essays on Empire and Its Enemies. Brooklyn: Autonomedia.
    • Williams, Rosalind. 1990. “Lewis Mumford as a Historian of Technology in Technics and Civilization.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Williams, Rosalind. 1994. “The Political and Feminist Dimensions of Technological Determinism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Winner, Langdon. 1986. The Whale and the Reactor. Chicago: University of Chicago Press.
    • Winner, Langdon. 1989. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press.
    • Wojtowicz, Robert. 1996. Lewis Mumford and American Modernism: Eutopian Themes for Architecture and Urban Planning. Cambridge: Cambridge University Press.

     

  • Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)


    a review of Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, 2018)

    by Zachary Loeb

    ~

    There is something rather precious about Google employees, and Internet users, who earnestly believe the “don’t be evil” line. Though those three words have often been taken to represent a sort of ethos, their primary function is as a steam vent – providing a useful way to allow building pressure to escape before it can become explosive. While “don’t be evil” is associated with Google, most of the giants of Silicon Valley have their own variations of this comforting ideological façade: Apple’s “think different,” Facebook’s talk of “connecting the world,” the smiles on the side of Amazon boxes. And when a revelation troubles this carefully constructed exterior – when it turns out Google is involved in building military drones, when it turns out that Amazon is making facial recognition software for the police – people react in shock and outrage. How could this company do this?!?

    What these revelations challenge is not simply the mythos surrounding particular tech companies, but the mythos surrounding the tech industry itself. After all, many people have their hopes invested in the belief that these companies are building a better, brighter future, and they are naturally taken aback when they are forced to reckon with stories that reveal how these companies are building the types of high-tech dystopias that science fiction has been warning us about for decades. And in this space there are some who seem eager to allow a new myth to take root: one in which the unsettling connections between big tech firms and the military-industrial complex are something new. But as Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates, the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today” (9).

    Thus, cases of Google building military drones, Facebook watching us all, and Amazon making facial recognition software for the police, need to be understood not as aberrations. Rather, they are business as usual.

    Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

    Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information-processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

    While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58) – and this split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial or scientific side. ARPANET, the famous forerunner of the Internet, was developed to connect computer centers at a variety of prominent universities. Relying on Interface Message Processors (IMPs), the system routed messages through a variety of nodes, and if one node went down it would reroute the message through other nodes – it was a system for relaying information built to withstand a nuclear war.

    Though all manner of utopian myths surround the early Internet, and by extension its forerunner, Levine highlights that “surveillance was baked in from the very beginning” (75). Case in point: the largely forgotten CONUS Intel program, which gathered information on millions of Americans. By encoding this information on IBM punch cards, which were then fed into a computer, law enforcement groups and the army were able to access information not only regarding criminal activity, but also regarding activities protected by the First Amendment. As news of these databases reached the public it generated fears of a high-tech surveillance society, leading some Senators, such as Sam Ervin, to push back against the program. And in a foreshadowing of further things to come, “the army promised to destroy the surveillance files, but the Senate could not obtain definitive proof that the files were ever fully expunged” (87). Though there were concerns about the surveillance potential of ARPANET, its growing power was hardly checked, and more government agencies began building their own subnetworks (PRNET, SATNET). Yet, as they relied on different protocols, these networks could not connect to each other until TCP/IP, “the same basic network language that powers the Internet today” (95), allowed them to do so.

    Yet surveillance of citizens, and public pushback against computerized control, is not the grand origin story that most people are familiar with when it comes to the Internet. Instead the story that gets told is one whereby a military technology is filtered through the sieve of a very selective segment of the 1960s counterculture, allowing it to emerge with some rebellious credibility. This view, owing much to Stewart Brand, transformed the nascent Internet from a military technology into a technology for everybody “that just happened to be run by the Pentagon” (106). Brand played a prominent and public role in rebranding the computer, as well as those working on the computers – turning these cold calculating machines into doors to utopia, and portraying computer programmers and entrepreneurs as the real heroes of the counterculture. In the process the military nature of these machines disappeared behind a tie-dyed shirt, and the fears of a surveillance society were displaced by hip promises of total freedom. The government links to the network were further hidden as ARPANET slowly morphed into the privatized commercial system we know as the Internet. It may seem mind-boggling that the Internet was simply given away with “no real public debate, no discussion, no dissension, and no oversight” (121), but it is worth remembering that this was not the Internet we know. Rather, it was how the myth of the Internet we know was built – a myth that combined, as was best demonstrated by Wired magazine, “an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used” (133).

    The shift from ARPANET to the early Internet to the Internet of today presents a steadily unfolding tale wherein the result is that, today, “the Internet is like a giant, unseen blob that engulfs the modern world” (169). And in terms of this “engulfing” it is difficult not to think of the handful of giant tech companies (Amazon, Facebook, Apple, eBay, Google) that are responsible for much of it. In the present Internet atmosphere people have become largely inured to the almost clichéd canard that “if you’re not paying, you are the product,” but what this represents is how people have, largely, come to accept that the Internet is one big surveillance machine. Of course, feeding information to the giants made a sort of sense: many people (at least early on) seem to have been genuinely taken in by Google’s “Don’t Be Evil” image, and they saw themselves as the beneficiaries of the fact that “the more Google knew about someone, the better its search results would be” (150). The key insight that firms like Google seem to have understood is that a lot can be learned about a person based on what they do online (especially when they think no one is watching) – what people search for, what sites people visit, what people buy. And most importantly, what these companies understand is that “everything that people do online leaves a trail of data” (169), and controlling that data is power. These companies “know us intimately, even the things that we hide from those closest to us” (171). ARPANET found itself embroiled in a major scandal, in its time, when it was revealed how it was being used to gather information on and monitor regular people going about their lives – and it may well be that “in a lot of ways” the Internet “hasn’t changed much from its ARPANET days. It’s just gotten more powerful” (168).

    But even as people have come to gradually accept, by their actions if not necessarily by their beliefs, that the Internet is one big surveillance machine – periodically events still puncture this complacency. Case in point: Edward Snowden’s revelations about the NSA, which splashed the scale of Internet-assisted surveillance across the front pages of the world’s newspapers. Reporting linked to the documents Snowden leaked revealed how “the NSA had turned Silicon Valley’s globe-spanning platforms into a de facto intelligence collection apparatus” (193), and these documents exposed “the symbiotic relationship between Silicon Valley and the US government” (194). And yet, in the ensuing brouhaha, Silicon Valley was largely able to paint itself as the victim. Levine attributes some of this to Snowden’s own libertarian political bent; as he became a cult hero amongst technophiles, cypherpunks, and Internet advocates, “he swept Silicon Valley’s role in Internet surveillance under the rug” (199), while advancing a libertarian belief in “the utopian promise of computer networks” (200) similar to that professed by Stewart Brand. In many ways Snowden appeared as the perfect heir apparent to the early techno-libertarians, especially as he (like them) focused less on mass political action and more on doubling down on the idea that salvation would come through technology. And Snowden’s technology of choice was Tor.

    While Tor may project itself as a solution to surveillance, and be touted as such by many of its staunchest advocates, Levine casts doubt on this. Noting that “Tor works only if people are dedicated to maintaining a strict anonymous Internet routine” – one consisting of dummy e-mail accounts and all transactions carried out in Bitcoin – Levine suggests that what Tor offers is “a false sense of privacy” (213). Levine describes the roots of Tor in an original need to provide government operatives with an ability to access the Internet, in the field, without revealing their true identities; and in order for Tor to be effective (and not simply signal that all of its users are spies and soldiers) the platform needed to expand its user base: “Tor was like a public square—the bigger and more diverse the group assembled there, the better spies could hide in the crowd” (227).

    Though Tor had spun off as an independent non-profit, it remained reliant for much of its funding on the US government, a matter which Tor aimed to downplay through emphasizing its radical activist user base and by forming close working connections with organizations like WikiLeaks that often ran afoul of the US government. And in the figure of Snowden, Tor found a perfect public advocate, who seemed to be living proof of Tor’s power – after all, he had used it successfully. Yet, as the case of Ross Ulbricht (the “Dread Pirate Roberts” of Silk Road notoriety) demonstrated, Tor may not be as impervious as it seems – researchers at Carnegie Mellon University “had figured out a cheap and easy way to crack Tor’s super-secure network” (263). To further complicate matters Tor had come to be seen by the NSA “as a honeypot,” to the NSA “people with something to hide” were the ones using Tor and simply by using it they were “helping to mark themselves for further surveillance” (265). And much of the same story seems to be true for the encrypted messaging service Signal (it is government funded, and less secure than its fans like to believe). While these tools may be useful to highly technically literate individuals committed to maintaining constant anonymity, “for the average users, these tools provided a false sense of security and offered the opposite of privacy” (267).

    The central myth of the Internet frames it as an anarchic utopia built by optimistic hippies hoping to save the world from intrusive governments through high-tech tools. Yet, as Surveillance Valley documents, “computer technology can’t be separated from the culture in which it is developed and used” (273). Surveillance is at the core of, and has always been at the core of, the Internet – whether the all-seeing eye be that of the government agency, or the corporation. And this is a problem that, alas, won’t be solved by crypto-fixes that present technological solutions to political problems. The libertarian ethos that undergirds the Internet works well for tech giants and cypher-punks, but a real alternative is not a set of tools that allow a small technically literate gaggle to play in the shadows, but a genuine democratization of the Internet.

     

    *

     

    Surveillance Valley is not interested in making friends.

    It is an unsparing look at the origins of, and the current state of, the Internet. And it is a book that has little interest in helping to prop up the popular myths that sustain the utopian image of the Internet. It is a book that should be read by anyone who was outraged by the Facebook/Cambridge Analytica scandal, anyone who feels uncomfortable about Google building drones or Amazon building facial recognition software, and frankly by anyone who uses the Internet. At the very least, after reading Surveillance Valley many of those aforementioned situations seem far less surprising. While there are no shortage of books, many of them quite excellent, that argue that steps need to be taken to create “the Internet we want,” in Surveillance Valley Yasha Levine takes a step back and insists “first we need to really understand what the Internet really is.” And it is not as simple as merely saying “Google is bad.”

    While much of the history that Levine unpacks won’t be new to historians of technology, or those well versed in critiques of technology, Surveillance Valley brings many often-separate strands together into one narrative. Too often the early history of computing and the Internet is placed in one silo, while the rise of the tech giants is placed in another – by bringing them together, Levine is able to show the continuities and allow them to be understood more fully. What is particularly noteworthy in Levine’s account is his emphasis on early pushback against ARPANET, an often forgotten series of occurrences that certainly deserves a book of its own. Levine describes students in the 1960s who saw in early ARPANET projects “a networked system of surveillance, political control, and military conquest being quietly assembled by diligent researchers and engineers at college campuses around the country,” and as Levine provocatively adds, “the college kids had a point” (64). Similarly, Levine highlights NBC reporting from 1975 on the CIA and NSA spying on Americans by utilizing ARPANET, and on the efforts of Senators to rein in these projects. Though Levine is not presenting, nor is he claiming to present, a comprehensive history of pushback and resistance, his account makes it clear that liberatory claims regarding technology were often met with skepticism. And much of that skepticism proved to be highly prescient.

    Yet this history of resistance has largely been forgotten amidst the clever contortions that shifted the Internet’s origins, in the public imagination, from counterinsurgency in Vietnam to the counterculture in California. Though the area of Surveillance Valley that will likely cause the most contention is Levine’s chapters on crypto-tools like Tor and Signal, perhaps his greatest heresy is his refusal to pay homage to the early tech-evangels like Stewart Brand and Kevin Kelly. While the likes of Brand, and John Perry Barlow, are often celebrated as visionaries whose utopian blueprints have been warped by power-hungry tech firms, Levine is frank in framing such figures as long-haired libertarians who knew how to spin a compelling story in such a way that made empowering massive corporations seem like a radical act. And this is in keeping with one of the major themes that runs, often subtly, through Surveillance Valley: the substitution of technology for politics. Thus, in his book, Levine not only frames the Internet as disempowering insofar as it runs on surveillance and relies on massive corporations, but also emphasizes how the ideological core of the Internet focuses all political action on technology. To every social, economic, and political problem the Internet presents itself as the solution – but Levine is unwilling to go along with that idea.

    Those who were familiar with Levine’s journalism before he penned Surveillance Valley will know that much of his reporting has covered crypto-tech, like Tor, and similar privacy technologies. Indeed, in a certain respect, Surveillance Valley can be read as an outgrowth of that reporting. It is also important to note, as Levine does in the book, that he did not make himself many friends in the crypto community by taking on Tor. It is doubtful that cypherpunks will like Surveillance Valley, but it is just as doubtful that they will bother to actually read it and engage with Levine’s argument or the history he lays out. This is a shame, for it would be a mistake to frame Levine’s book as an attack on Tor (or on those who work on the project). Levine’s comments on Tor are in keeping with the thrust of the larger argument of his book: such privacy tools are high-tech solutions to problems created by high-tech society, ones that mainly serve to keep people hooked into all those high-tech systems. And he questions the politics of Tor, noting that “Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable solution” (268). Or, to put it another way, Tor is kind of like shopping at Whole Foods – people who are concerned about their food are willing to pay a bit more to get it there, but in the end shopping there lets people feel good about what they’re doing without genuinely challenging the broader system. And, of course, now Whole Foods is owned by Amazon. The most important element of Levine’s critique of Tor is not that it doesn’t work – for some (like Snowden) it clearly does – but that most users do not know how to use it properly (and are unwilling to lead a genuinely full-crypto lifestyle), and so it fails to offer more than a false sense of security.

    Thus, to say it again, Surveillance Valley isn’t particularly interested in making a lot of friends. With one hand it brushes away the comforting myths about the Internet, and with the other it pushes away the tools that are often touted as the solution to many of the Internet’s problems. And in so doing Levine takes on a variety of technoculture’s sainted figures, like Stewart Brand and Edward Snowden, and even organizations like the EFF. While Levine clearly doesn’t seem interested in creating new myths, or propping up new heroes, it seems as though he somewhat misses an opportunity here. Levine shows how some groups and individuals had warned about the Internet back when it was still ARPANET, and a greater emphasis on such people could have helped create a better sense of alternatives and paths not taken. Levine notes near the book’s end that, “we live in bleak times, and the Internet is a reflection of them: run by spies and powerful corporations just as our society is run by them. But it isn’t all hopeless” (274). Yet it would be easier to believe the “isn’t all hopeless” sentiment had the book provided more analysis of successful instances of pushback. While it is respectable that Levine puts forward democratic (small d) action as the needed response, this comes as the solution at the end of a lengthy work that has discussed how the Internet has largely eroded democracy. What Levine’s book points to is that it isn’t enough to just talk about democracy; one needs to recognize that some technologies are democratic while others are not. And though we are loath to admit it, perhaps the Internet (and computers) simply are not democratic technologies. Sure, we may be able to use them for democratic purposes, but that does not make the technologies themselves democratic.

    Surveillance Valley is a troubling book, but it is an important book. It smashes comforting myths and refuses to leave its readers with simple solutions. What it demonstrates in stark relief is that surveillance and unnerving links to the military-industrial complex are not signs that the Internet has gone awry, but signs that the Internet is functioning as intended.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communication department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay