You can learn a lot about your society’s relationship to technology by looking at its streets. Are the roads filled with personal automobiles or trolley-cars, bike lanes or occupied parking spaces? Are there navigable sidewalks, or is this the sort of place where a car is a requirement? Does a subway rumble beneath the street, or is the only sound the honking of cars stuck in traffic? Are the people standing on the corner waiting for the bus, or for the car they just booked through an app? Or is it some strange combination of many of these things at once? The roadways we traverse on a regular basis can come to seem quite banal in their familiarity, yet they capture a complex tale of past decisions, current priorities, and competing visions of the future.
Our streets not only provide us with a literal path by which to get where we are going, they also represent an essential space in which debates about where we are going as a society play out. All of which is to say, as we hurtle down the road towards the future, it is important to pay attention to the fight for control of the steering wheel, and to the sort of vehicle in which we find ourselves.
In Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation, Paris Marx analyzes the social forces that have made our roads (and by extension our cities, towns, and suburbs) function the way they do, with particular emphasis on the groups and individuals trying to determine what the roads of the future will look like. It is a cutting assessment of the ways in which tech companies are seeking to take over the streets and sidewalks, as well as the space above and below them: with gig-economy drivers, self-driving cars, new tunnels, delivery robots, and much else. To the extent that technological solutions are frequently touted as the only possible response to complex social/political/economic problems, Marx moves beyond the flashy headlines to consider what those technological solutions actually look like when the proverbial rubber hits the road. In Road to Nowhere the streets and sidewalks appear as sites of political contestation, and Marx delivers an urgent warning against surrendering those spaces to big tech. After all, as Marx documents, the lords of the information superhighway are leaving plenty of flaming debris along the literal highways.
The primary focus of Road to Nowhere is the particular vision of mobility being put forth by contemporary tech companies, but Marx takes care to explore the industries and interests that had been enforcing their view of mobility long before anyone had ever held a smartphone. As Marx explains, the street and the city were not always the possession of the personal automobile; indeed, the automobile was at one time “the dominant technology that ‘disrupted’ our society” (10). The introduction of the automobile saw these vehicles careening down streets that had once been shared by many other groups, and as automobiles left destruction in their wake, the ensuing push for safety was won by ostensibly protecting pedestrians while handing the streets over to the automobile. Marx connects the rise of the personal automobile to “a much longer trend of elites remaking the city to serve their interests” (11), and emphasizes how policies favoring automobiles undermined other ways of moving about cities (including walking and streetcars). As the personal automobile grew in popularity, and as mass production made it a product available not only to the wealthy, physical spaces were further transformed such that an automobile became less and less of a luxury and more and more of a need. From the interstate highway system to the growth of suburbs to under-investment in public transit to the development of a popular mythos connecting the car to freedom—Marx argues that the auto-oriented society is not the inevitable result of the introduction of the automobile, but the result of policies and priorities that gradually remade streets and cities in the automobile’s image.
Even as the automobile established its dominance in the mid-twentieth century, a new sort of technology began to appear that promised (and threatened) to further remake society: the computer. Pivoting for a moment away from the automobile, Marx considers the ideological foundations of many tech companies, with their blend of techno-utopian hopefulness and anti-government sentiment wherein “faith was also put in technology itself as the means to address social and economic challenges” (44). While the mythology of Silicon Valley often lauds the rebellious geek, hacking away in a garage, Marx highlights the ways in which Silicon Valley (and the computing industry more generally) owes its early success to a massive influx of government money. Cold War military funding was very good—indeed, essential—for the nascent computing sector. Despite the significance of government backing, Silicon Valley became a hotbed for an ideology that sneered at democratic institutions while elevating the computer (and its advocates) as the bringer(s) of societal change. Thus, the very existence of complex social/political/economic problems became evidence of the failures of democracy and proof of the need for high-tech solutions—this was not only an ahistorical and narrow worldview, but one wherein a group of mostly-wealthy, mostly-white, mostly-cis-male tech lovers saw themselves as the saviors society had been waiting for. And while this worldview was reified in various gadgets, apps, and platforms, “as tech companies seek to extend their footprint into the physical world” this same ideology—alongside an agenda that places “growth, profits, and power ahead of the common good”—is what undergirds Silicon Valley’s mobility project (62).
One of the challenges in wrestling with tech companies’ visions is to not be swept away by the shiny high-tech vision of the future they disseminate. And one area where this can be particularly difficult is when it comes to electric cars. After all, amongst the climate conscious, the electric car appears as an essential solution in the fight against climate change. Yet, beyond the fact that “electric vehicles are not a new invention” (64), the electric car appears as an almost perfect example of the ways in which tech companies attempt to advance a seemingly progressive vision of the future while further entrenching the status quo. Much of the green messaging around electric vehicles “narrowly focuses on tailpipe emissions, ignoring the harms that pervades the supply chain and the unsustainable nature of auto-oriented development” (71). Too often the electric car appears as a way for individuals of means to feel that they are doing their part to “personal responsibility” their way out of climate change, even as the continued focus on the personal automobile blocks the transition towards public transit that is needed. Furthermore, the shift towards electric vehicles does not end destructive extraction, it just shifts the extraction from fossil fuels to minerals like copper, nickel, cobalt, lithium, and coltan. The electric car risks being a way of preserving auto-centric society, and this “does not solve how the existing transportation system fuels the climate crisis and the destruction of local environments all around the world” (88).
If personal ownership of a car is such a problem, perhaps the solution is to simply have an app on your phone that lets you summon a vehicle (complete with a driver) when you need one, right? Not so fast. Companies like Uber sold themselves to the public on a promise of making cars available when needed, especially for urban dwellers who did not necessarily have a car of their own. The pitch was one of increased mobility, where those in need of a ride could easily hire one, while cash-strapped car owners could have a new opportunity to earn a few extra bucks driving in the evenings. Far from solving congestion, empowering drivers, and increasing everyone’s mobility, “the Uber model adds vehicles to the road and creates more traffic, especially since the app incentivizes drivers to be active during peak times when traffic is already backed up” (99). Despite claims that their app-based services would solve a host of issues, Uber (and its ilk) have added to urban congestion, have failed to provide their drivers with a stable income, and have not truly increased the mobility options for underserved communities.
If gig-drivers wind up being such an issue, why not try to construct a world where drivers are not necessary? Thus, perhaps few ideas related to the future of mobility have as firm a grasp on the popular imagination as the self-driving car: a fantasy that seems straight out of science fiction. And with good reason, for what a science fiction writer can dream up, and what a special effects team can mock up for a movie, faces serious obstacles in the real world. The story of tech companies and autonomous vehicles is one of grandiose hype (that often generates numerous glowing headlines), followed by significantly diminished plans once the challenges of introducing self-driving cars are recognized. While much of the infrastructure we encounter is built with automobiles in mind, autonomous cars require a variety of other sorts of not-currently-existing infrastructure. Just as “automobiles required a social reconstruction in addition to a physical reconstruction, so too will autonomous vehicles” (125), and this will entail transforming infrastructure and habits that have been built up over decades. Attempts to introduce autonomous vehicles have revealed the clash between the tech company vision of the world and the complexities of the actually existing world—which is a major reason why many tech companies are quietly backing away from the exuberance with which they once hyped autonomous cars.
Well, if the already existing roads are such a challenge, why not avoid them altogether? Instead of looking at the road, look above the road and below the road! Thus, plans such as the Boring Company’s proposed tunnels, and ideas about “flying cars,” seek to get around many of the challenges the tech industry is encountering in the streets by attempting to capitalize on seemingly unused space. At first glance, such ideas may seem like clear examples of the sort of “out of the box thinking” for which tech companies are famed, yet “the span of time between the initial bold claims of prominent tech figures and the general realization that they are fraudulent appears to be shrinking” (159). And once more, in contrast to the original framing that treats new tunnels and flying cars as emancipatory routes, what becomes clear is that these are just another area in which wealthy tech elites are fantasizing about ways of avoiding getting stuck in traffic with the hoi polloi.
Much of the history of the automobile that Marx recounts involves pedestrians being deprived of more and more space, and this is a story that continues as new battles for the sidewalk intensify. As with other tech company interventions in mobility, micromobility solutions that cover sidewalks in scooters and bikes rentable via app present themselves with a veneer of green accessibility. Yet littering cities with cheap bikes and scooters that wear out quickly while clogging the sidewalks turns out to be just another service “designed to benefit the company” without genuinely assessing the mobility needs of particular communities (166). Besides, all of those sidewalk scooters now have to compete for space with swarms of delivery robots that make sidewalks still more difficult to use.
From the electric car to the app-summoned chauffeur to the autonomous car to the flying car, tech companies have no shortage of high-tech ideas for the future of mobility. And yet, “the truth is that when we look at the world that is actually being created by the tech industry’s interventions, we find that the bold promises are in fact a cover for a society that is both more unequal and one where that inequality is even more fundamentally built into the infrastructure and services we interact with every single day” (185). While the built environment is filled with genuine mobility issues, the solutions put forward by tech companies ignore the complexity of how these issues came about in favor of techno-fixes designed to favor tech companies’ bottom lines while feeding them new data streams to capitalize on. The gleaming city envisioned by tech elites and their companies may be broadcast to all, but it is a playground for the wealthy tech elite, not for the rest of us.
The hope that tech companies will come along and sort everything out with some sort of nifty high-tech program speaks to a lack of faith in societies’ ability to tackle the complex issues they face. Yet, to make mobility work for everyone, what is essential is not to flee from politics, but to truly engage with politics. The tech companies are working to reshape our streets and cities to better fit their needs; people must counter by insisting that their streets and cities be made to actually meet people’s needs. Instead of looking to cities with roads clogged with Ubers and sidewalks blocked by broken scooters, we need to be paying attention to the cities that have devoted resources (and space) to pedestrians while improving and expanding public transit. The point is not to reject technology but to reject the tech companies’ narrow definition of what technology is and how it can be used: “we need to utilize technology where it can serve us, while ensuring power remains firmly in the hands of a democratic public” (223).
After all, “better futures are possible, but they will not be delivered through technological advancement alone” (225). We can no longer sit idle in the passenger seat; we need to take the wheel, and the wheels.
***
Contrary to its somewhat playful title, Road to Nowhere lays out a very clear case that Silicon Valley’s vision of the future of mobility is in fact a road to somewhere—the problem is that it’s not a good somewhere. While the excited pronouncements of tech CEOs (and the oft-uncritical coverage of those pronouncements) may evoke images of gleaming high-tech utopias, a more critical and grounded assessment of these pipedreams reveals them to be unrealistic fantasies mixed with ideas designed primarily to meet the needs of tech CEOs rather than the genuine mobility needs of most people. As Paris Marx makes clear throughout the chapters of Road to Nowhere, it is essential to stop taking the plans of tech companies at face value and to instead do the discomforting work of facing up to the realities of these plans. The way our streets and cities have been built certainly presents a range of very real problems to solve, but in the choice of which problems to address it makes a difference whether the challenges being considered are those facing a minimum-wage worker or a billionaire mogul furious about sitting in traffic. Or, to put it somewhat differently, there are flying cars in the movie Blade Runner, but that does not mean we should attempt to build that world.
Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation provides a thoughtful analysis and impassioned denunciation of Silicon Valley’s mobility efforts up to this point, and pivots from this consideration of the past and the present to cast doubt on Silicon Valley’s future efforts. Throughout the book, Marx writes with the same punchy eloquence that has made them such a lively host of the Tech Won’t Save Us podcast. And while Marx has staked out an important space in the world of contemporary tech critique thanks to that podcast, this book makes it clear that Marx is not only a dynamic interviewer of other critics, but a vital critic in their own right. With its wide-ranging analysis, and clear consideration of the route we find ourselves on unless we change course, Road to Nowhere is an important read for those concerned with where Silicon Valley is driving us.
The structure of the book provides a clear argument that briskly builds momentum, and even as the chapters focus on specific topics they flow seamlessly from one to the next. Having started by providing a quick history of the auto-centric city, and of the roots of Silicon Valley’s ideology, Marx’s chapters follow a clear path through mobility issues. If the problem is pollution, why not electric cars? If the problem is individual cars, even electric ones, why not make it easy to summon someone else’s car? If the problem is the treatment of the drivers of those cars, why not cars without drivers? If autonomous vehicles are unrealistic because of already existing infrastructure, why not wholly new infrastructure? If creating wholly new infrastructure (below and above ground) is more difficult than it may seem, what about flooding cities with cheap bikes? Part of what makes Road to Nowhere’s critique of Silicon Valley’s ideas so successful is that Marx does not get bogged down in just one of Silicon Valley’s areas of interest, and instead provides a critique that captures that the issue is not only Silicon Valley’s response to this or that problem, but the way that Silicon Valley frames problems and envisions solutions. To the extent that the auto-centric world is reflective of a world that was remade in the shape of the automobile, Silicon Valley is currently hard at work attempting to remake the world in its own shape, and as Marx makes clear, the needs of Silicon Valley companies and the needs of people trying to get around are not the same.
At the core of Marx’s analysis is a sense that the worldview of Silicon Valley is no longer so easily confined to certain geographical boundaries in California. As the tech companies have been permitted to present themselves as the shiny saviors of society, that ideology has often overwhelmed faith in democratic solutions. Marx notes that “as the neoliberal political system gave up on bold policies in favor of managing a worsening status quo, they left the door open to techno-utopians to fill the void” (5). When people no longer believe that a democratic society can even maintain the bridges and roads, it opens up a space in which tech companies can drive into town and announce an ambitious project to remake the roads. Marx further argues that “too often, governments stand back and allow the tech industry to roll out whatever ideas its executives and engineers can dream up,” a belief undergirded by a sense that “whatever tech companies want is inevitable…and that neither governments, traditional companies, nor even the public should stand in their way” (178). Part of the danger of this sense of inevitability is that it cedes the future of mobility to the tech companies, robbing municipalities both of initiative and of the responsibility to meet the mobility needs of the people who live there. Granted, as the many failures Marx documents show, just because a tech company says that it will do something does not necessarily mean that it will be able to do it.
Published by Verso Books and written in a clear, comprehensive voice, Road to Nowhere stands as an intervention into broad discussions about the future of mobility, particularly those currently taking place on the political left. Thus, even as many readers are likely to cheer at Marx’s skewering of Musk, many of those same readers will chafe at the book’s refusal to treat electric cars as a solution. Sure, it’s one thing to lambast Elon Musk (and by extension Tesla), but to critique electric cars as such? Here Marx makes it very clear that we cannot be taken in by too-neat techno-fixes, whether they are touted by a specific company (such as Tesla) or attached to a certain class of technologies (electric cars). As Marx makes clear, all of the minerals in those electric cars come from somewhere, and, what’s more, the issues that we face (both in mobility and in the environment) are not simply the result of one particular technology (such as the gas-powered car) but of the way in which we have built our societies around certain technologies and the infrastructure that those technologies require. The matter of mobility, therefore, is about which questions we are willing to ask, and about recognizing that we need to be asking a different set of questions.
Road to Nowhere is at its best when Marx does this work by moving past the particular tech companies to consider the deeper matters of the underlying technologies. Certainly, readers of the book will find plenty of consideration of Tesla and Uber (alongside their famous leaders), but the strength of Road to Nowhere is that the book does not act as though the problem is simply Tesla or Uber. Rather, Marx considers the way in which the problem forces us to think about automobiles themselves, about the long history of automobiles, and about the ways in which so much physical infrastructure has been built to prioritize the use of automobiles. This is, obviously, not to give Uber or Tesla a pass—but Marx does the essential work of emphasizing that this isn’t just about a handful of tech companies and their bombastic CEOs; this is a question about the ways in which societies orient themselves around particular sets of technologies. And Marx’s response is not a call for a return to some romanticized pastoral landscape, but is instead an argument in favor of placing the needs of people above the needs of technologies (and the people selling those technologies). Much of our built environment has been constructed around the automobile; what if we started building that environment around the needs of the human being?
The challenge of what it would mean to construct our cities around the needs of people, rather than the needs of profit (or the needs of machines), is not a new one. And while Marx briefly considers some past figures who have wrestled with this matter—such as Jane Jacobs and Murray Bookchin—it might have been worthwhile to engage more fully with past critics. At the risk of becoming too much of a caricature of myself as a reviewer, it does seem like an unfortunate missed opportunity, in a book about technology and cities, not to engage with the prominent technological critic Lewis Mumford, whose oeuvre includes numerous books specifically on the topic of technology and cities (he won the National Book Award for his volume The City in History). And these matters of cities, speed, and vehicles were topics with which many other critics of technology engaged in the twentieth century. Indeed, the rise of the auto-centric society has had its critics all along the way, and it could have been fascinating to engage with more of those figures. Marx certainly makes a strong case for the ways in which Silicon Valley’s designs on the city are informed by its particular ideology, but engaging more closely with earlier critics of technology could have opened up other spaces for considering broader problems about ideologies surrounding technology that predate Silicon Valley. Of course, it is unfair to criticize an author for the book they did not write, and the intention is not to take away from Marx’s important book—but contemporary criticism of technology has much to gain not just from the history of technology but from the history of technological criticism.
Road to Nowhere is a challenging book in the best sense of that word, for it discomforts readers and pushes them to see the world around them in a new light. Marx achieves this particularly well by refusing to be taken in by easy solutions, and by recognizing that even as techno-fixes may be the standard offering from Silicon Valley, a belief in such fixes permeates well beyond the pitches of tech firms. Nevertheless, Marx is also clear in recognizing that even as many of our problems flow from and have been exacerbated by technology, technology needs to be seen as part of the solution. And here, Marx is deft at considering the way in which technology represents a much more robust and wide-ranging category than the overly simplistic version it is often reduced to when conversations turn to “tech.” Thus, the matter is nothing so ridiculous as being “pro-technology” or “anti-technology,” but recognizing “that technology is not the primary driver in creating fairer and more equitable cities and transportation systems”; what is necessary is “deeper and more fundamental change to give people more power over the decisions that are made about their communities” (8). The matter is not just about technology (as such), but about the value systems embedded in particular sorts of technologies, and about recognizing that certain sets of technologies are going to be better for achieving particular social goals. After all, “the technologies unleashed by Silicon Valley are not neutral” (179), though the same is also very much true of the technologies that were unleashed before Silicon Valley. Constructing a different world thus requires us to consider not only how we can remake that world, but how we can remake our technologies. As Marx wonderfully puts it, “when we assume that technology can only develop in one way, we accept the power of the people who control that process, but there is no guarantee that their ideal world is one that truly works for everyone” (179).
You can learn a lot about your society’s relationship to technology by looking at its streets. And Road to Nowhere is a powerful reminder that those streets do not have to look the way they do, and that we have a role to play in determining what future those streets are taking us towards.
_____
Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.
Shoshana Zuboff’s The Age of Surveillance Capitalism begins badly: the author’s house burns down. Her home is struck by lightning, and it takes Zuboff a few minutes to realize the enormity of the conflagration happening all around her and to escape. The book, written after the fire goes out, is a warning about the enormity of the changes kindled while we slept. Zuboff describes a world in which autonomy, agency, and privacy–the walls of her house–are under threat from a corporate apparatus that records everything in order to control behavior. That act of monitoring and recording inaugurates a new era in the development of capitalism, one that Zuboff believes is destructive of both individual liberty and democratic institutions.
Surveillance Capitalism is an alarm to all of us to get out of the house, lest it burn down all around us. In making this warning, however, Zuboff discounts the long history of surveillance outside the middle-class enclaves of Europe and the United States, and assumes that protecting the privacy of individuals in those same enclaves will solve the problem of surveillance for the Rest.
The house functions as a metaphor throughout the book: first, as a warning about how difficult it is to recognize a radical remaking of our world as it is happening (this change is akin to a lightning strike); second, as an indicator of the kind of world we inhabit (a world that could be enhancing of life but instead treats life as a resource to be extracted); and third, as the protection that is meant to solve the other two problems.
Zuboff contrasts an early moment of the digitally connected world, an internet of things that was on a closed circuit within one house, with the current moment, in which the same devices are wired to the companies that make them. For Zuboff, that difference demonstrates the exponential changes that happened between the early promise of the internet and its current malformation. Surveillance Capitalism argues that from the connective potential of the early Internet has come the current dystopian state of affairs, where human behavior is monitored by companies in order to nudge that behavior toward predetermined ends. In this way, Surveillance Capitalism reverses an earlier moment of connectivity boosterism, exemplified by the title of Thomas Friedman’s popular 2005 book, The World is Flat, which celebrated technologically-produced globalization.[1] The years from the mid- to late 2000s witnessed a significant critique of the flat world hypothesis, which could be summed up as an argument for both the vast unevenness of the world and for the continuous remaking of global tropes into local and varied meanings. Yet here we are again, it seems, in 2020, except instead of celebrating flatness, we are sounding the flat alarm.
The book’s very dimensions–it is a doorstop, on purpose–act as an inoculation against the thinness and flatness Zuboff diagnoses as predominant features of our world. Zuboff argues that these features are unprecedented, that they mark an extreme deviation from capitalism as it has been. They therefore require both a new name and new analytic tools. The name “surveillance capitalism” describes information-gathering enterprises that are unprecedented in human history, and that information, Zuboff writes, is used to predict “our futures for the sake of others’ gain, not ours” (11). As tech companies increasingly use our data to steer behavior towards products and advertising, our ability to experience a deep interiority where we can exercise autonomous choice shrinks. Importantly for Zuboff, these companies collect not just the data we willingly give, but the data exhaust that we often unknowingly and unintentionally emit as we move through a world mediated by our devices. Behavioral nudges mark for Zuboff the ultimate endpoint of a capitalism gone awry, a capitalism that drives humans to abandon free will in favor of being governed by corporations that use aggregate data about individual interactions to determine future human action.
Zuboff’s flat alarm usefully takes the reader through the philosophical underpinnings of behaviorism, following the work of B.F. Skinner, a psychologist working at Harvard in the mid-twentieth century who believed that adjusting human behavior was a matter of changing external environments through positive and negative stimuli, or reinforcements. Zuboff argues that behaviorist attitudes toward the world, considered outré in their time, have moved to the heart of Silicon Valley philosophies of disruption, where they meet a particular mode of capital accumulation driven by the logics of venture, neutrality, and macho meritocracies. The result is a kind of ideology of tools and of making humans into tools, which Zuboff terms instrumentarianism, at once driven to produce companies that are profitable for venture capitalists and investors and to treat human beings as sources of data to be turned toward profitability. Widespread surveillance is a necessary feature of this new world order because it is through the observation of every detail of human life that these companies can amass the data they need to turn a profit by predicting and ultimately controlling, or tuning, human behavior.
Zuboff identifies key figures in the development of surveillance capitalism, including the aforementioned Skinner. Her particular mode of critique tends to focus on CEOs, and Zuboff reads their pronouncements as signs of the legacy of behaviorism in the C-suites of contemporary firms. Zuboff also spends several chapters situating the critics of these surveillance capitalists as those who need to raise the flat world alarm. She compares this need to both her personal experience with the house fire and the experience of thinkers such as Hannah Arendt writing on totalitarianism. Here, she draws an explicit parallel that conjoins totalitarianism and surveillance capitalism. Zuboff argues that just as totalitarianism was unthinkable as it was unfolding, so too does surveillance capitalism seem an impossible future given how we like to think about human behavior and its governance. Zuboff’s argument here is highly persuasive, since she is suggesting that the critics will always come to realize what it is they are critiquing just before it is too late to do anything about it. She also argues that behaviorism is in some sense the inverse of state-governed totalitarianism: while totalitarianism attempted to discipline humans from the inside out, surveillance capitalism is agnostic when it comes to interiority–it only deals in and tries to engineer surface effects. For all this ‘neutrality’ over and against belief, it is equally oppressive, because it aims at social domination.
Previous reviews have provided an overview of the chapters in this book; I will not repeat the exercise, except to say that the introduction nicely lays out her overall argument and could be used effectively to broach the topic of surveillance for many audiences. The chapters outlining B.F. Skinner’s imprint on behaviorist ideologies are also useful in providing historical context for the current age, as is the general story of Google’s turn toward profitability as told in Part I. And yet, the promise of these earlier chapters–particularly the nice turn of phrase, the “behavioral means of production”–yields in the latter chapters to an impoverished account of our options and of the contradictions at work within tech companies. These lacunae are due at least in part to Zuboff’s choice of revolutionary subject–the middle-class consumer.
Toward the end of Surveillance Capitalism, Zuboff rebuilds her house, this time with thicker walls. She uses her house’s regeneration to argue for a philosophical concept she calls the “right to sanctuary,” based largely on the writings of Gaston Bachelard, whose Poetics of Space describes for Zuboff how the shelter of home shapes “many of our most fundamental ways of making sense of experience” (477). Zuboff believes that surveillance capitalists want to bring down all these walls, for the sake of opening up our every action to collection and our every impulse to guidance from above. One might pause here and wonder whether the breaking down of walls is not fundamental to capitalism from the beginning, rather than an aberration of the current age. In other words, does the age of surveillance mark such a radical break from the general thrust of capital’s need to open up new markets and exploit new raw materials? Or, more to the point, for whom does it signify a radical aberration? Posing this question would bring into focus the need to interrogate the complicity of the very categories of autonomy, agency, and privacy in the extension of capitalism across geographies, and to historicize the production of interiority within that same frame.
Against the contemporary tendency toward effacing the interior life of families and individuals, Zuboff offers sanctuary as the right to protection from surveillance. In this moment, that protection needs thick walls. For Zuboff, those walls need to be built by young people–one gets the sense that she is speaking across these sections to her own children and those of her children’s generation. The problem with describing sanctuary in this way is that it narrows the scope for both understanding the stakes of surveillance and recognizing where the battles for control over data will be fought.
As a broadside, Surveillance Capitalism works through a combination of rhetoric and evidence. Zuboff hopes that a younger generation will fight the watchers for control over their own data. Yet, by addressing largely a well-off, college-educated, and young audience, Zuboff restricts the people who are being asked to take up the cause, and fails to ask the difficult question of what it would take to build a house with thicker walls for everyone.
A persistent concern while reading this book is whether its analysis can encompass otherwheres. The populations that are most at risk under surveillance capitalism include immigrants, minorities, and workers, both within and outside the United States. The framework of data exhaust and its use to predict and govern behavior does not quite illuminate the uses of data collection to track border crossers, “predict” crime, and monitor worker movements inside warehouses. These relationships require an analysis that can get at the overlap between corporate and government surveillance, which Surveillance Capitalism studiously avoids. The book begins with an analysis of a system of exploitation based on turning data into profits, and argues that the new mode of production shifts the motor of capitalism from products to information, a point well established by previous literature. Given this analysis, it is astonishing that the last section of the book returns to a defense of individual rights, without stopping to question whether the ‘hive’ forms of organization that Zuboff finds in the logics of surveillance capital may have been a cooptation of radical kinds of social organizing arranged against a different model of exploitation. Leaderless movements like Occupy should be considered fully when describing hives, along with contemporary initiatives like tech worker cooperatives and technical alternatives like local mesh networks. The possibility that these radical forms of social organization may be subject to cooptation by the actors Zuboff describes never appears in the book. Instead, Zuboff appears to mistranslate theories of the subject that locate agency above or below the level of the individual into political acquiescence to a program of total social control. Without taking the step of considering the political potential in ‘hive-like’ social organization, Zuboff’s corrective falls back on notions of individual rights and protections and is unable to imagine a new kind of collective action that moves beyond both individualism and behaviorism. This failure, for instance, skews Zuboff’s arguments toward the familiar ground of data protection as a solution rather than toward the more radical stances of refusal, which question data collection in the first place.
Zuboff’s world is flat. It is a world in which there are Big Others that suck up an undifferentiated public’s data, Others whose objective is to mold our behavior and steal our free will. In this version of flatness, what was once described positively is now described negatively, as if we had collectively turned a rosy-colored smooth world flat black. Yet, how collective is this experience? How will it play out if the solutions we provide rely on bracketing out the question of what kinds of people and communities are afforded the chance to build thicker walls? This calls forth a deeper issue than simply that of a lack of inclusion of other voices in Zuboff’s account. After all, perhaps fixing the surveillance issue through the kinds of rights to sanctuary that Zuboff suggests would also fix the issue for those who are not usually conceived of as mainstream consumers.
Except, historical examples ranging from Simone Browne’s explication of surveillance and slavery in Dark Matters to Achille Mbembe’s articulation of necropolitics teach us that consumer protection is a thin filament on which to hang protection for all from overweening surveillance apparatuses–corporate or otherwise. One could easily imagine a world where the privacy rights of well-heeled Americans are protected, but those of others continue to be violated. To reference one pertinent example, companies that are banking on monetizing data through a contractual relationship, where individuals sell the data that they themselves own, are simultaneously banking on those who need to sell their data to make money. In other words, as legal scholar Stacy-Ann Elvy notes (2017), in a personal data economy low-income consumers will be incentivized to sell their data without much concern for the conditions of sale, even while those who are well-off will have the means to avoid these incentives, resulting in the illusion of individual control and uneven access to privacy determined by degrees of socioeconomic vulnerability. These individuals will also be exposed to a greater degree of risk that their information will not stay secure.
Simone Browne demonstrates that what we understand as surveillance was developed on and through black bodies, and that these populations of slaves and ex-slaves have developed strategies of avoiding detection, which she calls dark sousveillance. As Browne notes, “routing the study of contemporary surveillance” through the histories of “black enslavement and captivity opens up the possibility for fugitive acts of escape,” even while it shows that the normative surveillance of white bodies was built on long histories of experimentations with black bodies (Browne 2015, 164). Achille Mbembe’s scholarship on necropolitics was developed through the insight that some life becomes killable, or in Jasbir Puar’s (2017) memorable phrasing, maimable, at the same time that other life is propagated. Mbembe proposes “necropolitics” to describe “death worlds” where “death,” not life, “is the space where freedom and negotiation happen,” where “vast populations are subjected to conditions of life conferring on them the status of living dead” (Mbembe 2003, 40). The right to sanctuary appears to short-circuit the spaces where life has already been configured as available for expropriation through perpetual wounding. Crucial to both Browne’s and Mbembe’s arguments is the insight that the study of the uneven harms of surveillance concomitantly surfaces the tactics of opposition and the archives of the world that provide alternative models of refuge outside the contractual property relationship evoked across the pages of Surveillance Capitalism.
All those considered outside the ambit of individualized rights, including those in territories marked by extrajudicial measures, those deemed illegal, those perennially under threat, those who while at work are unprotected, those in unseen workplaces, and those simply unable to exercise rights to privacy due to law or circumstance, have little place in Zuboff’s analysis. One only has to think of Kashmir, and the access that people with no ties to this place will now have to build houses there, to begin to grasp the contested politics of home-building.[2] Without an acknowledgement of the limits of both the critique of surveillance capitalism and the agents of its proposed solutions, it seems this otherwise promising book will reach the usual audiences and have the usual effect of shoring up some peoples’ and places’ rights even while making the rest of the world and its populations available for experiments in data appropriation.
_____
Sareeta Amrute is Associate Professor of Anthropology at the University of Washington. Her scholarship focuses on contemporary capitalism and ways of working, and particularly on the ways race and class are revisited and remade in sites of new economy work, such as coding and software economies. She is the author of the book Encoding Race, Encoding Class: Indian IT Workers in Berlin (Duke University Press, 2016) and recently published the article “Of Techno-Ethics and Techno-Affects” in Feminist Review.
[1] Friedman (2005) attributes this phrase to Nandan Nilekani, then Co-Chair of the Indian tech company Infosys (and subsequently Chair of the Unique Identification Authority of India).
[2] Until 2019, Articles 370 and 35A of the Indian Constitution granted the territories of Jammu and Kashmir special status, which allowed the state to keep on its books laws restricting who could buy land and property in Kashmir by allowing the territories to define who counted as a permanent resident. After the abrogation of Article 370, rumors swirled that the rich from Delhi and elsewhere would now be able to purchase holiday homes in the area. See e.g. Devansh Sharma, “All You Need to Know about Buying Property in Jammu and Kashmir”; Parvaiz Bukhari, “Myth No 1 about Article 370: It Prevents Indians from Buying Land in Kashmir.”
_____
Works Cited
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
Tim Wu’s brilliant new book, The Curse of Bigness: Antitrust in the New Gilded Age, analyses in detail one specific aspect and cause of the dominance of big companies in general and big tech companies in particular: the current unwillingness to modernize antitrust law to deal with concentration in the provision of key Internet services. Wu is a professor at Columbia Law School and a contributing opinion writer for the New York Times. He is best known for his work on Net Neutrality theory. He is the author of the books The Master Switch and The Attention Merchants, along with “Network Neutrality, Broadband Discrimination,” and other works. In 2013 he was named one of America’s 100 Most Influential Lawyers, and in 2017 he was named to the American Academy of Arts and Sciences.
What are the consequences of allowing unrestricted growth of concentrated private power, and abandoning most curbs on anticompetitive conduct? As Wu masterfully reminds us:
We have managed to recreate both the economics and politics of a century ago – the first Gilded Age – and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public. (14)
While increasing concentration, and its negative effects on social equity, is a general phenomenon, it is particularly concerning with regard to the Internet: “Most visible in our daily lives is the great power of the tech platforms, especially Google, Facebook, and Amazon, who have gained extraordinary power over our lives. With this centralization of private power has come a renewed concentration of wealth, and a wide gap between the rich and poor” (15). These trends have very real political effects: “The concentration of wealth and power has helped transform and radicalize electoral politics. As in the Gilded Age, a disaffected and declining middle class has come to support radically anti-corporate and nationalist candidates, catering to a discontent that transcends party lines” (15). “What we must realize is that, once again, we face what Louis Brandeis called the ‘Curse of Bigness,’ which, as he warned, represents a profound threat to democracy itself. What else can one say about a time when we simply accept that industry will have far greater influence over elections and lawmaking than mere citizens?” (15). And, I would add, what have we come to when some advocate that corporations should have veto power over public policies that affect all of us?
Surely it is, or should be, obvious that current extreme levels of concentration are not compatible with the premises of social and economic equity, free competition, or democracy. And that “the classic antidote to bigness – the antitrust and other antimonopoly laws – might be recovered and updated to face the challenges of our times” (16). Those who doubt these propositions should read Wu’s book carefully, because he shows that they are true. My only suggestion for improvement would be to add a more detailed explanation of how network effects interact with economies of scale to favour concentration in the ICT industry in general, and in telecommunications and the Internet in particular. But this topic is well explained in other works.
As Wu points out, antitrust law must not be restricted (as it is at present in the USA) “to deal with one very narrow type of harm: higher prices to consumers” (17). On the contrary, “It needs better tools to assess new forms of market power, to assess macroeconomic arguments, and to take seriously the link between industrial concentration and political influence” (18). The same has been said by other scholars (e.g. here, here, here and here), by a newspaper, an advocacy group, a commission of the European Parliament, a group of European industries, a well-known academic, and even by a plutocrat who benefitted from the current regime.
Do we have a choice? Can we continue to pretend that we don’t need to adapt antitrust law to rein in the excessive power of the Internet giants? No: “The alternative is not appealing. Over the twentieth century, nations that failed to control private power and attend to the economic needs of their citizens faced the rise of strongmen who promised their citizens a more immediate deliverance from economic woes” (18). (I would argue that any resemblance to the election of US President Trump, to the British vote to leave the European Union, and to the rise of so-called populist parties in several European countries [e.g. Hungary, Italy, Poland, Sweden] is not coincidental).
Chapter One of Wu’s book, “The Monopolization Movement,” provides historical background, reminding us that from the late nineteenth through the early twentieth century, dominant, sector-specific monopolies emerged and were thought to be an appropriate way to structure economic activity. In the USA, in the early decades of the twentieth century, under the Trust Movement, essentially every area of major industrial activity was controlled or influenced by a single man (but not the same man for each area), e.g. Rockefeller and Morgan. “In the same way that Silicon Valley’s Peter Thiel today argues that monopoly ‘drives progress’ and that ‘competition is for losers,’ adherents to the Trust Movement thought Adam Smith’s fierce competition had no place in a modern, industrialized economy” (26). This system rapidly proved to be dysfunctional: “There was a new divide between the giant corporation and its workers, leading to strikes, violence, and a constant threat of class warfare” (30). Popular resistance mobilized in both Europe and the USA, and it led to the adoption of the first antitrust laws.
Chapter Two, “The Right to Live, and Not Merely to Exist,” reminds us that US Supreme Court Justice Louis Brandeis “really cared about … the economic conditions under which life is lived, and the effects of the economy on one’s character and on the nation’s soul” (33). The chapter outlines Brandeis’ career and what motivated him to combat monopolies.
In Chapter Three, “The Trustbuster,” Wu explains how the 1901 assassination of US President McKinley, a devout supporter of unrestricted laissez-faire capitalism (“let well enough alone”, reminiscent of today’s calls for government to “do not harm” through regulation, and to “don’t fix it if it isn’t broken”), resulted in a fundamental change in US economic policy, when Theodore Roosevelt succeeded him. Roosevelt’s “determination that the public was ruler over the corporation, and not vice versa, would make him the single most important advocate of a political antitrust law.” (47). He took on the great US monopolists of the time by enforcing the antitrust laws. “To Roosevelt, economic policy did not form an exception to popular rule, and he viewed the seizure of economic policy by Wall Street and trust management as a serious corruption of the democratic system. He also understood, as we should today, that ignoring economic misery and refusing to give the public what they wanted would drive a demand for more extreme solutions, like Marxist or anarchist revolution” (49). Subsequent US presidents and authorities continued to be “trust busters”, through the 1990s. At the time, it was understood that antitrust was not just an economic issue, but also a political issue: “power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy” (54, citing Justice William Douglas). As we all know, “Increased industrial concentration predictably yields increased influence over political outcomes for corporations and business interests, as opposed to citizens or the public” (55). Wu goes on to explain why and how concentration exacerbates the influence of private companies on public policies and undermines democracy (that is, the rule of the people, by the people, for the people). And he outlines why and how Standard Oil was broken up (as opposed to becoming a government-regulated monopoly). The chapter then explains why very large companies might experience disecomonies of scale, that is, reduced efficiency. So very large companies compensate for their inefficiency by developing and exploiting “a different kind of advantages having less to do with efficiencies of operation, and more to do with its ability to wield economic and political power, by itself or conjunction with others. In other words, a firm may not actually become more efficient as it gets larger, but may become better at raising prices or keeping out competitors” (71). Wu explains how this is done in practice. The rest of this chapter summarizes the impact of the US presidential election of 1912 on US antitrust actions.
Chapter Four, “Peak Antitrust and the Chicago School,” explains how, during the decades after World War II, strong antitrust laws were viewed as an essential component of democracy; and how the European Community (which later became the European Union) adopted antitrust laws modelled on those of the USA. However, in the mid-1960s, scholars at the University of Chicago (in particular Robert Bork) developed the theory that antitrust measures were meant only to protect consumer welfare, and thus no antitrust actions could be taken unless there was evidence that consumers were being harmed, that is, that a dominant company was raising prices. Harm to competitors or suppliers was no longer sufficient for antitrust enforcement. As Wu shows, this “was really laissez-faire reincarnated.”
Chapter Five, “The Last of the Big Cases,” discusses two of the last really large US antitrust cases. The first was the breakup of the regulated de facto telephone monopoly, AT&T, which was initiated in 1974. The second was the case against Microsoft, which started in 1998 and ended in 2001 with a settlement that many consider to be a negative turning point in US antitrust enforcement. (A third big case, the 1969-1982 case against IBM, is discussed in Chapter Six.)
Chapter Six, “Chicago Triumphant,” documents how the US Supreme Court adopted Bork’s “consumer welfare” theory of antitrust, leading to weak enforcement. As a consequence, “In the United States, there have been no trustbusting or ‘big cases’ for nearly twenty years: no cases targeting an industry-spanning monopolist or super-monopolist, seeking the goal of breakup” (110). Thus, “In a run that lasted some two decades, American industry reached levels of industry concentration arguably unseen since the original Trust era. A full 75 percent of industries witnessed increased concentration from the years 1997 to 2012” (115). Wu gives concrete examples: the old AT&T monopoly, which had been broken up, has reconstituted itself; there are only three large US airlines; there are three regional monopolies for cable TV; etc. But the greatest failure “was surely that which allowed the almost entirely uninhibited consolidation of the tech industry into a new class of monopolists” (118).
Chapter Seven, “The Rise of the Tech Trusts,” explains how the Internet morphed from a very competitive environment into one dominated by large companies that buy up any threatening competitor. “When a dominant firm buys a nascent challenger, alarm bells are supposed to ring. Yet both American and European regulators found themselves unable to find anything wrong with the takeover [of Instagram by Facebook]” (122).
The Conclusion, “A Neo-Brandeisian Agenda,” outlines Wu’s thoughts on how to address current issues regarding dominant market power. These include renewing the well-known practice of reviewing mergers; opening up the merger review process to public comment; renewing the practice of bringing major antitrust actions against the biggest companies; breaking up the biggest monopolies; adopting the market investigation law and practices of the United Kingdom; and recognizing that the goal of antitrust is not just to protect consumers against high prices, but also to protect competition per se, that is, to protect competitors, suppliers, and democracy itself. “By providing checks on monopoly and limiting private concentration of economic power, the antitrust law can maintain and support a different economic structure than the one we have now. It can give humans a fighting chance against corporations, and free the political process from invisible government. But to turn the ship, as the leaders of the Progressive era did, will require an acute sensitivity to the dangers of the current path, the growing threats to the Constitutional order, and the potential of rebuilding a nation that actually lives up to its greatest ideals” (139).
In other words, something is rotten in the state of the Internet: it is marked by the “collection and exploitation of personal data”; it has “recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities”; it has led to “erosion of the press, leading to erosion of democracy.” These developments are due to the fact that “US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies”; and to the fact that “vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economical and geo-political goals.”
Wu’s call for action is not just opportune, but necessary and important; at the same time, it is not sufficient.
God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
– Plato, Timaeus
The term “computational capital” understands the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and recognizes contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus posited then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double-entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing, leaves no stone unturned. Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.
The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the iron clad movements of code. While “iron clad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines to the point where in some quarters “code is law” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production (categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.) are also deposited and thus operationally disappeared into our machines.
Put simply, and, in deference to contemporary attention spans, too soon, our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?
Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867, 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton,” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest that not only has society as a whole become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai—Big Artificial Intelligence (the world system as organized by computational capital) and “you” Little Artificial Intelligence (as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.
Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).
Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.
Introduction: The Computational Unconscious
Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay, and the book of which it is chapter one, develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or, Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power towards ends that for me are still best described by the word communist (or perhaps meta-communist but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed, and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.
Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as being the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices; which is to say that the universal Turing Machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing Machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine as perhaps among the most important abstract machines in the last century, save perhaps that of capital itself. However, both their ranking and even their separability is here what we seek to put into question.
Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.
The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.
Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes and indeed enforces a sovereignty tantamount to divinity, externalities be damned. Dissident, fugitive planning and black study—a studied refusal of optimization, a refusal of computational colonialism—may offer a way out of the current geo-(post-)political and its computational orthodoxy.
Computational Idolatry and Multiversality
In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology from a conference sponsored by the sovereign state of NYU:
As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)
Science fiction, in the form of “the Matrix,” is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world—or, rather, cosmic—spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality,” the trouble here is with “itself.” To the extent that we recognize that poesis (making) has been extended to our machines and it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it, to the point that Reality itself is thus a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses: multiversal. In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.
Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”
The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that no sufficiently powerful logical system is internally closed: within any such system, statements can be formulated that can neither be proved nor disproved—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open, there is no closure, complete self-knowledge is impossible and thus there is always an outside or a beyond, while in the latter group’s view, our philosophy, our politics and apparently our fate is wedded to a system that can know no outside since it may only render an outside in its own terms, unless, or perhaps, even if/as that encounter is catastrophic.
Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoesis) all we can know is ourselves, and, on the other, Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any of us who are, willingly or not, participant in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx: vol. 28, 38-39).[3]
This “subject, society,” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).
The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present,” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious are historical formations. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.
More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled the theory of the unconscious on information theory and cybernetic theory. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns including the following:
By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)
Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks but to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What is Cybernetics? She writes that “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”
“You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”
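The “little pluses and minuses” can be made concrete. What follows is a minimal sketch in Python (my own illustration, not Liu’s or Lacan’s notation) of the coin-toss coding usually attributed to the “Seminar on ‘The Purloined Letter’”: random tosses are grouped into overlapping triples and classified as 1 (constancy: +++ or ---), 3 (alternation: +-+ or -+-), or 2 (everything else). Brute-force enumeration then shows that certain successions of symbols never occur; the chain of chance events, once coded, “organizes itself in accordance with laws.”

    import itertools

    def classify(triple):
        # 1 = constancy (+++ or ---), 3 = alternation (+-+ or -+-), 2 = the mixed cases
        a, b, c = triple
        if a == b == c:
            return 1
        if a != b and b != c:
            return 3
        return 2

    def symbol_chain(tosses):
        # Code a series of '+'/'-' tosses into its chain of 1/2/3 symbols (overlapping triples)
        return [classify(tosses[i:i + 3]) for i in range(len(tosses) - 2)]

    # Enumerate every possible run of four tosses and record which pairs of
    # consecutive symbols can actually appear in the coded chain.
    possible = {tuple(symbol_chain("".join(t))) for t in itertools.product("+-", repeat=4)}
    print(sorted(possible))
    # [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (3, 2), (3, 3)]
    # A 1 is never followed directly by a 3, nor a 3 by a 1: the random series,
    # once symbolized, obeys a syntax its "human support" did not choose.

Nothing in the sketch depends on meaning; the constraint falls out of the overlap of the triples alone, which is one way of glossing the claim that, for Liu’s Lacan, structure precedes linguistic considerations.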
Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that, given that cybernetic research in the West was driven by military and, later, industrial applications, that is, applications deemed essential for the development of capitalism and the capitalist way of life, such a statement calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes. For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.
Perhaps more elusive even than this historicity of the unconscious, grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,
Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)
This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing, by capital markets, by apparatuses, by digital computers, requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask: in what way have capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, structured computational paradigms? In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking, expressed in numbers, materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notion that 1) the camera is a black box that is a programme, and 2) that the photograph or technical image produces a “magical” relation to the world inasmuch as people understand the photograph as a window rather than as information organized by concepts. This amounts to the technical image as itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras. If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.
Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera, functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: The function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego and normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day-to-day and indeed nano-second to nano-second organization of life on planet earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked up, highly ramified social differences? Structural and now infra-structural inequalities include social injustices—what could be thought of as and in a certain sense are algorithmic racism, sexism and homophobia, and also programmatically unequal access to the many things that sustain life, and legitimize murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions resulting from the near automation of geo-political emergence along with a vast conscription of agents is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.
For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.
But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.
The short answer then regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation, that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly-established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed to profitably exploit an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth. Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.
Dialectics of the Human-Machine
For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 pairs of chromosomes of the human genome that replicate what was thought of as the human being as an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means precisely, computation.
If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance—as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine—is on the wane.
Naming the human-machine is of course a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Civilization and Colonization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from and was deployed to do the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia, in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.
This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French revolution with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather it is the tendency that is unveiled by the present platform post-humanism of computational racial capital. The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?
So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean, for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisectioned cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been impacted by our machines. However, the POTUSian defeat of the so-called humans is double-edged to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood the dominant discourse on the posthuman as not the end of the white liberal human subject but precisely, when in the hands of those not committed to an anti-racist and decolonial project, as a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body. Sovereignty itself must change (in order, as Giuseppe Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.
Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive, nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things to ever happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most), on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals. However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may be as bad or worse, and in absolute numbers is already worse, for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?
For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. The quantitative exchanges of equivalents, circulating as exchange values without qualities, are the real abstractions that give rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money, that is, to argue that money as a medium is value-neutral because it embodies what has (in many respects correctly, but in a qualified way) been termed “the universal value form,” would be to miss the entire system of leveraged exploitation that sustains the money-system. In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist-system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in mere delusional utopianism and self-interested (might one even say psychotic?) naysaying.
Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species lifespan as an instance of a cosmic program is not so distant from the endemic violence of postmodern—and, indeed, post-human—fascism he depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. For the individual and for the species, what we beheld in these two films was the annihilation of our agency (at the level of the individual and of the species)—and it was eerily seductive, Benjamin’s self-destruction as an aesthetic pleasure of the highest order taken to cosmic proportions and raised to the level of Art (1969).
So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism which, in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and is now no longer, really has been pre-history, stages of pre-history? What would it mean to say in earnest “What’s past is prologue?”[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?
I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punchcard system and mechanical tabulator that was used in the 1890 US census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which normally took eight to ten years, now took a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without having to rebuild the tabulators—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing-Tabulating-Recording Company, which renamed itself IBM in 1924.
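The plugboard’s “early form of programming” can be glossed in a few lines: the tabulating machinery stays fixed while a configurable sequence of fields, the wiring, determines what gets counted. A minimal sketch, in Python rather than relays and punch cards, with field names and records invented purely for illustration:

    from collections import Counter

    # Hypothetical punch-card records: each card is a tuple of coded census fields.
    FIELDS = ("province", "sex", "literacy")
    cards = [
        ("north", "f", "literate"),
        ("north", "m", "illiterate"),
        ("south", "f", "literate"),
        ("south", "f", "illiterate"),
        ("south", "m", "literate"),
    ]

    def tabulate(cards, plugboard):
        # Count cards grouped by the fields "wired" into the plugboard; changing
        # the wiring changes the tabulation without rebuilding the machine.
        idx = [FIELDS.index(f) for f in plugboard]
        return Counter(tuple(card[i] for i in idx) for card in cards)

    print(tabulate(cards, ("province",)))              # population per province
    print(tabulate(cards, ("province", "literacy")))   # literacy broken down by province

The point of the sketch is only structural: once sorting and counting are parameterized in this way, a population exists for the machine solely as combinations of coded fields, which is the condition for the uses discussed in what follows.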
While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines and 2) as Vicente Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the Holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).
Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish Colonialism to U.S. Imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:
Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.
But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.
IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).
The sorting of populations and individuals by forms of social difference including “race,” ability and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped) for the purposes of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between 1/10th and 1/6th of the population of the Philippines, and the Nazi-administered Holocaust are but two world historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.
The mechanisms, as well as the social meaning, of computation were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English took as its ur-text Jefferson the Virginian (1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy for its suppression of information regarding Jefferson’s relation to slavery (1997).[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but language that belongs to a particular mode of sociality and racialized enfranchisement. Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and then as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard, even as it also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
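The technical point on which this argument turns, namely that Shannon’s letter-frequency statistics, and any entropy estimate built on them, inherit the biases of whatever corpus supplies them, can be illustrated with a minimal sketch. The helper below is hypothetical: the function name and the two sample strings are my own stand-ins, not Shannon’s sources. It simply shows that the first-order entropy of English “in general” is always, in practice, the entropy of some particular text.

```python
from collections import Counter
from math import log2

def letter_entropy(text: str) -> float:
    """First-order Shannon entropy, in bits per letter, of the A-Z letters in a text."""
    letters = [c for c in text.upper() if "A" <= c <= "Z"]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Different corpora yield different frequency tables, and therefore different
# "standards"; the two samples below are illustrative stand-ins only.
sample_a = "We hold these truths to be self-evident, that all men are created equal."
sample_b = "I was born in Tuckahoe, near Hillsborough, and about twelve miles from Easton."
print(round(letter_entropy(sample_a), 3), round(letter_entropy(sample_b), 3))
```

Whatever numbers such a calculation returns, the point stands: the “representative” sample is a choice, and the choice carries a history.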
In a 1987 interview, Shannon was quoted as saying “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” (1971). If humans are the robot’s companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being” such that it is/was is encrypted in their function. And “we” have never been “one.”
Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues. “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another.” (in Nakamura and Chow-White 2012, 30-31; italics in original)
This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines industry standards for the now-digital reproduction of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for the standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.
More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged with them. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden-variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.
Thus we may observe that the statistical methods utilized by IBM to find Jews in the shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view, in which computational process has its essence without reference to any concrete whatever (even if it is not one of pure mathematical abstraction), can be found in what follows. An article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups” in Bloomberg News almost giddily reports:
The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]
Not only is war good computer business, it’s good for computerized business. It is ironic that the technology is likened to a tapestry and now used to sell textiles—almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early transistors for Fairchild, the company descended from the laboratory of Silicon Valley legend and founding father, as well as infamous eugenicist, William Shockley.[9] The article goes on to confess that the latest consumer spin-offs, which facilitate the real-time imaging of couches in your living room and are capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)
Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends that we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming at ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).
Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.
Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” i.e., a Roman term for “slave”) in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies, now extended to the machine, a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?
Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts lower-class workers, most of them people of color, leaving the Google Mountain View campus during off hours. These workers, the book scanners, shared neither the spaces nor the perks of Google’s white-collar workers; they had different parking lots and entrances and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that its posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude. Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.
The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and with it the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that is endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory, of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting – the exponential growth in memory storage means also an exponential growth in systematic forgetting – the withering away of the analogue. As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global Holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the Anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.
_____
Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.
[1]A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content indifferent processing that the term “exchange value,” severed as it is of all qualities, indicates.
[2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).
[3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,
Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)
[4] If one takes the derivative of computational process at a particular point in space-time one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process, the summation of images is the movement of the process.
[5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.
[6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum, puts things as follows:
We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, healthcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.
The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).
[8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”
Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge.” Grey Room 18 (Winter): 26-51.
Chun, Wendy Hui Kyong. 2012. In Nakamura and Chow-White (2012), 38-69.
De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
This is a seminal and important book, which should be studied carefully by anyone interested in the evolution of society in light of the pervasive impact of the Internet. In a nutshell, the book documents how and why the Internet turned from a means to improve our lives into what appears to be a frightening dystopia driven by the collection and exploitation of personal data, data that most of us willingly hand over with little or no care for the consequences. “In our digital frenzy to share snapshots and updates, to text and videochat with friends and lovers … we are exposing ourselves‒rendering ourselves virtually transparent to anyone with rudimentary technological capabilities” (page 13 of the hardcover edition).
The book meets its goals (25) of tracing the emergence of a new architecture of power relations, documenting its effects on our lives, and exploring how to resist and disobey (though this last only rather succinctly). As the author correctly says (28), metaphors matter, and we need to re-examine them closely, in particular the so-called free flow of data.
As the author cogently points out, quoting Media Studies scholar Siva Vaidhyanathan, we “assumed digitization would level the commercial playing field in wealthy economies and invite new competition into markets that had always had high barriers to entry.” We “imagined a rapid spread of education and critical thinking once we surmounted the millennium-old problems of information scarcity and maldistribution” (169).
“But the digital realm does not so much give us access to truth as it constitutes a new way for power to circulate throughout society” (22). “In our digital age, social media companies engage in surveillance, data brokers sell personal information, tech companies govern our expression of political views, and intelligence agencies free-ride off e-commerce. … corporations and governments [are enabled] to identify and cajole, to stimulate our consumption and shape our desires, to manipulate us politically, to watch, surveil, detect, predict, and, for some, punish. In the process, the traditional limits placed on the state and on governing are being eviscerated, as we turn more and more into marketized malleable subjects who, willingly or unwillingly, allow ourselves to be nudged, recommended, tracked, diagnosed, and predicted by a blurred amalgam of governmental and commercial initiative” (187).
“The collapse of the classic divide between the state and society, between the public and private sphere, is particularly debilitating and disarming. The reason is that the boundaries of the state had always been imagined in order to limit them” (208). “What is emerging in the place of separate spheres [of government and private industry] is a single behemoth of a data market: a colossal market for personal data” (198). “Knots of statelike power: that is what we face. A tentacular amalgam of public and private institutions … Economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target” (215). “This is all the more troubling because the combinations we face today are so powerful” (210).
As a consequence, “Digital exposure is restructuring the self … The new digital age … is having profound effects on our analogue selves. … it is radically transforming our subjectivity‒even for those, perhaps even more, who believe they have nothing to fear” (232). “Mortification of the self, in our digital world, happens when subjects voluntarily cede their private attachments and their personal privacy, when they give up their protected personal space, cease monitoring their exposure on the Internet, let go of their personal data, and expose their intimate lives” (233).
As the book points out, quoting Software Freedom Law Center founder Eben Moglen, it is justifiable to ask whether “any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only its people but the world” (254). “This is a different form of despotism, one that might take hold only in a democracy: one in which people lose the will to resist and surrender with broken spirit” (255).
The book opens with an unnumbered chapter that masterfully reminds us of the digital society we live in: a world in which both private companies and government intelligence services (also known as spies) read our e-mails and monitor our web browsing. Just think of “the telltale advertisements popping up on the ribbon of our search screen, reminding us of immediately past Google or Bing queries. We’ve received the betraying e-mails in our spam folders” (2). As the book says, quoting journalist Yasha Levine, social media has become “a massive surveillance operation that intercepts and analyses terabytes of data to build and update complex psychological profiles on hundreds of millions of people all over the world‒all of it in real time” (7). “At practically no cost, the government has complete access to people’s digital selves” (10).
We provide all this data willingly (13), because we have no choice and/or because we “wish to share our lives with loved ones and friends” (14). We crave digital connections and recognition and “Our digital cravings are matched only by the drive and ambition of those who are watching” (14). “Today, the drive to know everything, everywhere, at every moment is breathtaking” (15).
But “there remain a number of us who continue to resist. And there are many more who are ambivalent about the loss of privacy or anonymity, who are deeply concerned or hesitant. There are some who anxiously warn us about the dangers and encourage us to maintain reserve” (13).
“And yet, even when we hesitate or are ambivalent, it seems there is simply no other way to get things done in the new digital age” (14), be it airline tickets, hotel reservations, buying goods, booking entertainment. “We make ourselves virtually transparent for everyone to see, and in so doing, we allow ourselves to be shaped in unprecedented ways, intentionally or wittingly … we are transformed and shaped into digital subjects” (14). “It’s not so much a question of choice as a feeling of necessity” (19). “For adolescents and young adults especially, it is practically impossible to have a social life, to have friends, to meet up, to go on dates, unless we are negotiating the various forms of social media and mobile technology” (18).
Most have become dulled by blind faith in markets, the neoliberal mantra (better to let private companies run things than the government), fear of terrorism‒dulled into believing that, if we have nothing to hide, then there is nothing to fear (19). Even though private companies, and governments, know far more about us than a totalitarian regime such as that of East Germany “could ever have dreamed” (20).
“We face today, in advanced liberal democracies, a radical new form of power in a completely altered landscape of political and social possibilities” (17). “Those who govern, advertise, and police are dealing with a primary resource‒personal data‒that is being handed out for free, given away in abundance, for nothing” (18).
According to the book, “There is no conspiracy here, nothing untoward.” But the author probably did not have access to Shawn M. Powers and Michael Jablonski’s The Real Cyberwar: The Political Economy of Internet Freedom (2015), published around the same time as Harcourt’s book, which shows that the current situation was actually created, or at least facilitated, by deliberate actions of the US government (which were open, not secret), resulting in what the book calls, quoting journalist James Bamford, “a surveillance-industrial empire” (27).
The observations and conclusions outlined above are meticulously justified, with numerous references, in the numbered chapters of the book. Chapter 1 explains how analogies of the current surveillance regime to Orwell’s 1984 are imperfect because, unlike in Orwell’s imagined world, today most people desire to provide their personal data and do so voluntarily (35). “That is primarily how surveillance works today in liberal democracies: through the simplest desires, curated and recommended to us” (47).
Chapter 2 explains how the current regime is not really a surveillance state in the classical sense of the term: it is a surveillance society because it is based on the collaboration of government, the private sector, and people themselves (65, 78-79). Some believe that government surveillance can prevent or reduce terrorist attacks (55-56), never mind that it might violate constitutional rights (56-57) or be ineffective, or that terrorist attacks in liberal democracies have resulted in far fewer fatalities than, say, traffic accidents or opioid overdoses.
Chapter 3 explains how the current regime is not actually an instantiation of Jeremy Bentham’s Panopticon, because we are not surveilled in order to be punished‒on the contrary, we expose ourselves in order to obtain something we want (90), and we don’t necessarily realize the extent to which we are being surveilled (91). As the book puts it, Google strives “to help people get what they want” by collecting and processing as much personal data as possible (103).
Chapter 4 explains how narcissism drives the willing exposure of personal data (111). “We take pleasure in watching [our friends], ‘following’ them, ‘sharing’ their information‒even while we are, unwittingly, sharing our every keyboard stroke” (114). “We love watching others and stalking their digital traces” (117).
Yet opacity is the rule for corporations‒as the book says, quoting Frank Pasquale (124-125), “Internet companies collect more and more data on their users but fight regulations that would let those same users exercise some control over the resulting digital dossiers.” In this context, it is worth noting recent proposals to the World Trade Organization that would go in the direction favored by dominant corporations.
The book explains in summary fashion the importance of big data (137-140). For an additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation. As the book correctly notes, “In the nineteenth century, it was the government that generated data … But now we have all become our own publicists. The production of data has become democratized” (140).
Chapter 5 explains how big data, and its analysis, is fundamentally different from the statistics that were collected, analyzed, and published in the past by governments. The goal of statistics is to understand and possibly predict the behavior of some group of people who share some characteristics (e.g. they live in a particular geographical area, or are of the same age). The goal of big data is to target and predict individuals (158, 161-163).
Chapter 6 explains how we have come to accept the loss of privacy and control of our personal data (166-167). A change in outlook, largely driven by an exaggerated faith in free enterprise (168 and 176), “has made it easier to commodify privacy, and, gradually, to eviscerate it” (170). “Privacy has become a form of private property” (176).
The book documents well the changes in the US Supreme Court’s views of privacy, which have moved from defending a human right to balancing privacy with national security and commercial interests (172-175). Curiously, the book does not mention the watershed Smith v. Maryland case, in which the US Supreme Court held that telephone metadata is not protected by the right to privacy, nor the US Electronic Communications Privacy Act, under which many e-mails are not protected either.
The book mentions the incestuous ties between the intelligence community, telecommunications companies, multinational companies, and military leadership that have facilitated the implementation of the current surveillance regime (178); these ties are exposed and explained in greater detail in Powers and Jablonski’s The Real Cyberwar. This chapter ends with an excellent explanation of how digital surveillance records are in no way comparable to the old-fashioned paper files that were collected in the past (181).
Chapter 7 explores the emerging dystopia, engendered by the fact that “The digital economy has torn down the conventional boundaries between governing, commerce, and private life” (187). In a trend that should be frightening, private companies now exercise censorship (191), practice data mining on scales that are hard to imagine (194), control worker performance by means beyond the dreams of any Taylorist (196), and even aspire to “predict consumer preferences better than consumers themselves can” (198).
The size of the data brokerage market is huge and data on individuals is increasingly used to make decisions about them, e.g. whether they can obtain a loan (198-208). “Practically none of these scores [calculated from personal data] are revealed to us, and their accuracy is often haphazard” (205). As noted above, we face an interdependent web of private and public interests that collect, analyze, refine, and exploit our personal data‒without any meaningful supervision or regulation.
Chapter 8 explains how digital interactions are reconfiguring our self-images, our subjectivity. We know, albeit at times only implicitly, that we are being surveilled and this likely affects the behavior of many (218). Being deprived of privacy affects us, much as would being deprived of property (229). We have voluntarily given up much of our privacy, believing either that we have no choice but to accept surveillance, or that the surveillance is in our interests (233). So it is our society as a whole that has created, and nurtures, the surveillance regime that we live in.
As shown in Chapter 9, that regime is a form of digital incarceration. We are surveilled even more closely than are people obliged by court order to wear electronic tracking devices (237). Perhaps a future smart watch will even administer sedatives (or whatever) when it detects, by analyzing our body functions and comparing with profiles downloaded from the cloud, that we would be better off being sedated (237). Or perhaps such a watch will be hijacked by malware controlled by an intelligence service or by criminals, thus turning a seemingly free choice into involuntary constraints (243, 247).
Chapter 10 shows in detail how, as already noted, the current surveillance regime is not compatible with democracy. The book cites Tocqueville to remind us that democracy can become despotic and result in a situation where “people lose the will to resist and surrender with broken spirit” (255). The book summarily presents well-known data regarding the low voter turnouts in the United States, a topic covered in full detail in Robert McChesney’s Digital Disconnect: How Capitalism is Turning the Internet Against Democracy (2014), which explains how the Internet is having a negative effect on democracy. Yet “it remains the case that the digital transparency and punishment issues are largely invisible to democratic theory and practice” (216).
So, what is to be done? Chapter 11 extols the revelations made by Edward Snowden and those published by Julian Assange (WikiLeaks). It mentions various useful self-help tools, such as “I Fight Surveillance” and “Security in a Box” (270-271). While those tools are useful, they are not at present used pervasively and thus don’t really affect the current surveillance regime. We need more emphasis on making the tools available and on convincing more people to use them.
As the book correctly says, an effective measure would be to carry the privatization model to its logical extreme (274): since personal data is valuable, those who use it should pay us for it. As already noted, the industry that is thriving from the exploitation of our personal data is well aware of this potential threat, and has worked hard to attempt to obtain binding international norms, in the World Trade Organization, that would enshrine the “free flow of data,” where “free” in the sense of freedom of information is used as a Trojan Horse for the real objective, which is “free” in the sense of no cost and no compensation for the true owners of the data, we the people. As the book correctly mentions, civil society organizations have resisted this trend and made proposals that go in the opposite direction (276), including a proposal to enshrine the necessary and proportionate principles in international law.
Chapter 12 concludes the book by pointing out, albeit very succinctly, that mass resistance is necessary, and that it need not be organized in traditional ways: it can be leaderless, diffuse, and pervasive (281). In this context, I refer to the work of the JustNet Coalition and of the fledgling Internet Social Forum.
Again, this book is essential reading for anybody who is concerned about the current state of the digital world, and the direction in which it is moving.
Like other books by Milton Mueller, Will the Internet Fragment? is a must-read for anybody who is seriously interested in the development of Internet governance and its likely effects on other walks of life. This is true because of, and not despite, the fact that it is a tract that does not present an unbiased view. On the contrary, it advocates a certain approach, namely a utopian form of governance which Mueller refers to as “popular sovereignty in cyberspace”.
Mueller, Professor of Information Security and Privacy at Georgia Tech, is an internationally prominent scholar specializing in the political economy of information and communication. His seven books and scores of journal articles inform not only public policy but also science and technology studies, law, economics, communications, and international studies. His books Networks and States: The Global Politics of Internet Governance (MIT Press, 2010) and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002) are acclaimed scholarly accounts of the global governance regime emerging around the Internet.
Most of Will the Internet Fragment? consists of a rigorous analysis of what has been commonly referred to as “fragmentation,” showing that very different technological and legal phenomena have been conflated in ways that do not favour productive discussions. So-called “fragmentation” is usually defined as the contrary of the desired situation in which “every device on the Internet should be able to exchange data packets with any other device that was willing to receive them” (p. 6 of the book, citing Vint Cerf). But, as Mueller correctly points out, not all end-points of the Internet can reach all other end-points at all times, and there may be very good reasons for that (e.g. corporate firewalls, temporary network outages, etc.). Mueller then shows how network effects (the fact that the usefulness of a network increases as it becomes larger) will tend to prevent or counter fragmentation: a subset of the network is less useful than is the whole. He also shows how network effects can prevent the creation of alternative networks: once everybody is using a given network, why switch to an alternative that few are using? As Mueller aptly points out (pp. 63-66), the slowness of the transition to IPv6 is due to this type of network effect.
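The network-effect reasoning summarized above can be made concrete with a back-of-the-envelope calculation. The sketch below is my own illustration, not a model from the book, and the function name and example sizes are arbitrary: if the value of a network tracks the number of endpoint pairs that can reach each other, roughly n(n-1)/2, then splitting a network into isolated halves eliminates about half of the potential connections.

```python
def pairwise_links(n: int) -> int:
    """Number of potential pairwise connections among n mutually reachable endpoints."""
    return n * (n - 1) // 2

# One network of 1,000 endpoints versus the same endpoints split into two
# isolated halves of 500: fragmentation destroys roughly half the possible links.
whole = pairwise_links(1000)                       # 499,500
split = pairwise_links(500) + pairwise_links(500)  # 249,500
print(whole, split, round(split / whole, 3))
```

The same arithmetic underlies the IPv6 example the review cites: an endpoint reachable only over the new protocol initially joins the smaller fragment, so the individual incentive to switch is weak.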
The key contribution of this book is that it clearly identifies the real question of interest to those who are concerned about the governance of the Internet and its impact on much of our lives. That question (which might have been a better subtitle) is: “to what extent, if any, should Internet policies be aligned with national borders?” (See in particular pp. 71, 73, 107, 126 and 145). Mueller’s answer is basically “as little as possible, because supra-national governance by the Internet community is preferable”. This answer is presumably motivated by Mueller’s view that “institutions shift power from states to society” (p. 116), which implies that “society” has little power in modern states. But (at least ideally) states should be the expression of a society (as Mueller acknowledges on pp. 124 and 136), so it would have been helpful if Mueller had elaborated on the ways (and there are many) in which he believes states do not reflect society and on the ways in which so-called multi-stakeholder models would not be worse and would not result in a denial of democracy.
Before commenting on Mueller’s proposal for supra-national governance, it is worth commenting on some areas where a more extensive discussion would have been warranted. We note, however, that the book is part of a series that is deliberately intended to be short and accessible to a lay public. So Mueller had a 30,000 word limit and tried to keep things written in a way that non-specialists and non-scholars could access. This no doubt largely explains why he didn’t cover certain topics in more depth.
Be that as it may, the discussion would have been improved by being placed in the long-term context of the steady decrease in national sovereignty that started in 1648, when sovereigns agreed in the Treaty of Westphalia to refrain from interfering in the religious affairs of foreign states, and that accelerated in the 20th century. And by being placed in the short-term context of the dominance by the USA as a state (which Mueller acknowledges in passing on p. 12), and US companies, of key aspects of the Internet and its governance. Mueller is deeply aware of the issues and has discussed them in his other books, in particular Ruling the Root and Networks and States, so it would have been nice to see the topic treated here, with references to the end of the Cold War and what appears to be the re-emergence of some sort of equivalent international tension (albeit not for the same reasons and with different effects at least for what concerns cyberspace). It would also have been preferable to include at least some mention of the literature on the negative economic and social effects of current Internet governance arrangements.
It is telling that, in Will the Internet Fragment?, Mueller starts his account with the 2014 NetMundial event, without mentioning that it took place in the context of the outcomes of the World Summit on the Information Society (WSIS, whose genesis, dynamics, and outcomes Mueller well analyzed in Networks and States), and without mentioning that the outcome document of the 2015 UN WSIS+10 Review reaffirmed the WSIS outcomes and merely noted that Brazil had organized NetMundial, which was, in context, an explicit refusal to note (much less to endorse) the NetMundial outcome document.
The UN’s reaffirmation of the WSIS outcomes is significant because, as Mueller correctly notes, the real question that underpins all current discussions of Internet governance is “what is the role of states?,” and the Tunis Agenda states: “Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.”
Mueller correctly identifies and discusses the positive externalities created by the Internet (pp. 44-48). It would have been better if he had noted that there are also negative externalities, in particular regarding security (see section 2.8 of my June 2017 submission to ITU’s CWG-Internet), and that the role of states includes internalizing such externalities, as well as preventing anti-competitive behavior.
It is also telling that Mueller never explicitly mentions a principle that is no longer seriously disputed, and that was explicitly enunciated in the formal outcome of the WSIS+10 Review, namely that offline law applies equally online. Mueller does mention some issues related to jurisdiction, but he does not place those in the context of the fundamental principle that cyberspace is subject to the same laws as the rest of the world: as Mueller himself acknowledges (p. 145), allegations of cybercrime are judged by regular courts, not cyber-courts, and if you are convicted you will pay a real fine or be sent to a real prison, not to a cyber-prison. But national jurisdiction is not just about security (p. 74 ff.), it is also about legal certainty for commercial dealings, such as enforcement of contracts. There are an increasing number of activities that depend on the Internet, but that also depend on the existence of known legal regimes that can be enforced in national courts.
And what about the tension between globalization and other values such as solidarity and cultural diversity? As Mueller correctly notes (p. 10), the Internet is globalization on steroids. Yet cultural values differ around the world (p. 125). How can we get the benefits of both an unfragmented Internet and local cultural diversity (as opposed to the current trend to impose US values on the rest of the world)?
While dealing with these issues in more depth would have complicated the discussion, it also would have made it more valuable, because the call for direct rule of the Internet by and for Internet users must either be reconciled with the principle that offline law applies equally online, or be combined with a reasoned argument for the abandonment of that principle. As Mueller so aptly puts it (p. 11): “Internet governance is hard … also because of the mismatch between its global scope and the political and legal institutions for responding to societal problems.”
Since most laws, and almost all enforcement mechanisms are national, the influence of states on the Internet is inevitable. Recall that the idea of enforceable rules (laws) dates back to at least 1700 BC and has formed an essential part of all civilizations in history. Mueller correctly posits on p. 125 that a justification for territorial sovereignty is to restrict violence (only the state can legitimately exercise it), and wonders why, in that case, the entire world does not have a single government. But he fails to note that, historically, at times much of the world was subject to a single government (think of the Roman Empire, the Mongol Empire, the Holy Roman Empire, the British Empire), and he does not explore the possibility of expanding the existing international order (treaties, UN agencies, etc.) to become a legitimate democratic world governance (which of course it is not, in part because the US does not want it to become one). For example, a concrete step in the direction of using existing governance systems has recently been proposed by Microsoft: a Digital Geneva Convention.
Mueller explains why national borders interfere with certain aspects of certain Internet activities (pp. 104, 106), but national borders interfere with many activities. Yet we accept them because there doesn’t appear to be any “least worst” alternative. Mueller does acknowledge that states have power, and rightly calls for states to limit their exercise of power to their own jurisdiction (p. 148). But he posits that such power “carries much less weight than one would think” (p. 150), without justifying that far-reaching statement. Indeed, Mueller admits that “it is difficult to conceive of an alternative” (p. 73), but does not delve into the details sufficiently to show convincingly how the solution that he sketches would not result in greater power by dominant private companies (and even corporatocracy or corporatism), increasing income inequality, and a denial of democracy. For example, without the power of the state in the form of consumer protection measures, how can one ensure that private intermediaries would “moderate content based on user preferences and reports” (p. 147) as opposed to moderating content so as to maximize their profits? Mueller assumes that there would be a sufficient level of competition, resulting in self-correcting forces and accountability (p. 129); but current trends are just the opposite: we see increasing concentration and domination in many aspects of the Internet (see section 2.11 of my June 2017 submission to ITU’s CWG-Internet) and some competition law authorities have found that some abuse of dominance has taken place.
It seems to me that Mueller too easily concludes that “a state-centric approach to global governance cannot easily co-exist with a multistakeholder regime” (p. 117), without first exploring the nuances of multi-stakeholder regimes and the ways that they could interface with existing institutions, which include intergovernmental bodies as well as states. As I have stated elsewhere: “The current arrangement for global governance is arguably similar to that of feudal Europe, whereby multiple arrangements of decision-making, including the Church, cities ruled by merchant-citizens, kingdoms, empires and guilds co-existed with little agreement as to which actor was actually in charge over a given territory or subject matter. It was in this tangled system that the nation-state system gained legitimacy precisely because it offered a clear hierarchy of authority for addressing issues of the commons and provision of public goods.”
Which brings us to another key point that Mueller does not consider in any depth: if the Internet is a global public good, then its governance must take into account the views and needs of all the world’s citizens, not just those that are privileged enough to have access at present. But Mueller’s solution would restrict policy-making to those who are willing and able to participate in various so-called multi-stakeholder forums (apparently Mueller does not envisage a vast increase in participation and representation in these; p. 120). Apart from the fact that that group is not a community in any real sense (a point acknowledged on p. 139), it comprises, at present, only about half of humanity, and even much of that half would not be able to participate because discussions take place primarily in English, and require significant technical knowledge and significant time commitments.
Mueller’s path for the future appears to me to be a modern version of the International Ad Hoc Committee (IAHC), but Mueller would probably disagree, since he is of the view that the IAHC was driven by intergovernmental organizations. In any case, the IAHC work came to naught because of the unilateral intervention of the US government, well described in Ruling the Root, which resulted in the creation of ICANN, thus sparking discussions of Internet governance in WSIS and elsewhere. While Mueller is surely correct when he states that new governance methods are needed (p. 127), it seems a bit facile to conclude that “the nation-state is the wrong unit” and that it would be better to rely largely on “global Internet governance institutions rooted in non-state actors” (p. 129), without explaining how such institutions would be democratic and representative of all of the world’s citizens.
Mueller correctly notes (p. 150) that, historically, there have been major changes in sovereignty: the emergence and fall of empires, the creation of new nations, changes in national borders, etc. But he fails to note that most of those changes were the result of significant violence and use of force. If, as he hopes, the “Internet community” is to assert sovereignty and displace the existing sovereignty of states, how will it do so? Through real violence? Through cyber-violence? Through civil disobedience (e.g. migrating to bitcoin, or implementing strong encryption no matter what governments think)? By resisting efforts to move discussions into the World Trade Organization? Or by persuading states to relinquish power willingly? It would have been good if Mueller had addressed, at least summarily, such questions.
Before concluding, I note a number of more-or-less minor errors that might lead readers to imprecise understandings of important events and issues. For example, p. 37 states that “the US and the Internet technical community created a global institution, ICANN”: in reality, the leaders of the Internet technical community obeyed the unilateral diktat of the US government (at first somewhat reluctantly and later willingly) and created a California non-profit company, ICANN. And ICANN is not insulated from jurisdictional differences; it is fully subject to US laws and US courts. The discussion on pp. 37-41 fails to take into account the fact that a significant portion of the DNS, the ccTLDs, is already aligned with national borders, and that there are non-national telephone numbers; the real differences between the DNS and telephone numbers are that most URLs are non-national, whereas few telephone numbers are non-national; that national telephone numbers are given only to residents of the corresponding country; and that there is an international real-time mechanism for resolving URLs that everybody uses, whereas each telephone operator has to set up its own resolving mechanism for telephone numbers. Page 47 states that OSI was “developed by Europe-centered international organizations”, whereas actually it was developed by private companies from both the USA (including AT&T, Digital Equipment Corporation, Hewlett-Packard, etc.) and Europe working within global standards organizations (IEC, ISO, and ITU), who all happen to have secretariats in Geneva, Switzerland; whereas the Internet was initially developed and funded by an arm of the US Department of Defense and the foundation of the WWW was initially developed in a European intergovernmental organization. Page 100 states that “The ITU has been trying to displace or replace ICANN since its inception in 1998”; whereas a correct statement would be “While some states have called for the ITU to displace or replace ICANN since its inception in 1998, such proposals have never gained significant support and appear to have faded away recently.” Not everybody thinks that the IANA transition was a success (p. 117), nor that it is an appropriate model for the future (pp. 132-135; 136-137), and it is worth noting that ICANN successfully withstood many challenges (p. 100) while it had a formal link to the US government; it remains to be seen how ICANN will fare now that it is independent of the US government. ICANN and the RIRs do not have a “‘transnational’ jurisdiction created through private contracts” (p. 117); they are private entities subject to national law and the private contracts in question are also subject to national law (and enforced by national authorities, even if disputes are resolved by international arbitration). I doubt that it is a “small step from community to nation” (p. 142), and it is not obvious why anti-capitalist movements (which tend to be internationalist) would “end up empowering territorial states and reinforcing alignment” (p. 147), when it is capitalist movements that rely on the power of territorial states to enforce national laws, for example regarding intellectual property rights.
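On the point above that “there is an international real-time mechanism for resolving URLs that everybody uses”: that mechanism is the DNS, and the shared resolution step is visible from any program. The fragment below is only an illustration of that claim, not something drawn from the book; example.com is the reserved demonstration domain, and the addresses printed will vary by resolver.

```python
import socket

# The same globally shared DNS resolution path serves generic TLDs and national
# ccTLDs alike; no per-operator resolving arrangement is needed (network access required).
for *_, sockaddr in socket.getaddrinfo("example.com", 443, type=socket.SOCK_STREAM):
    print(sockaddr[0])
```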
Despite these minor quibbles, this book, and its references (albeit not as extensive as one would have hoped), will be a valuable starting point for future discussions of internet alignment and/or “fragmentation.” Surely there will be much future discussion, and many more analyses and calls for action, regarding what may well be one of the most important issues that humanity now faces: the transition from the industrial era to the information era and the disruptions arising from that transition.
This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.
Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology, those from English, and those from the Media, Art, and Text program, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.
This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.
As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the information that is published about ed-tech – the sheer amount of it, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.
So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)
The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.
Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, and have made, about the future of education than I am in writing predictions of my own.
One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.
Here are a couple of more recent predictions:
“In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.
And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”
Pray for Harvard Business School. No. I don’t think so.
Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but it is not a prediction based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so, or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training, is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).
Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.
Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”
“Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.
In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.
But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.
People find comfort in these predictions, in these fantasies. Why?
Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.
According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”
It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.
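If you want to see just how much of a curve it is, you can sketch one yourself. What follows is my own toy model – emphatically not Gartner’s proprietary method, just two made-up terms (a spike of hype plus a slow logistic climb) with parameters chosen only to reproduce the familiar shape:

```python
import math

# Toy "hype curve": a spike of inflated expectations, a trough of
# disillusionment, then a slow climb toward a "plateau of productivity."
# The parameters are arbitrary; this reproduces the shape, not any data.
def hype(t):
    peak = math.exp(-((t - 2.0) ** 2) / 0.8)          # peak of inflated expectations
    plateau = 0.6 / (1.0 + math.exp(-(t - 7.0)))      # slope of enlightenment -> plateau
    return peak + plateau

for i in range(21):
    t = i * 0.5                                       # "time since the technology trigger"
    print(f"t={t:4.1f} |" + "#" * int(hype(t) * 40))
```

Notice what the sketch makes obvious: it’s a curve, and it’s a curve that only ever ends in adoption.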
Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”
Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or for political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)
And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”
Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”
Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.
Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.
Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.
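To give you a feel for that mechanism, here is a caricature I wrote – not the NMC’s actual protocol, just the general shape of a Delphi-style whittling: experts score a list, the low scorers get dropped, and the rounds repeat until a tidy shortlist remains.

```python
import random

# A caricature of Delphi-style whittling -- NOT the NMC's actual protocol.
# Hypothetical "experts" score candidate technologies; the lowest scorers are
# dropped each round; repeat until six remain. No theory, no model -- just
# aggregated opinion converging on a shortlist.
candidates = ["MOOCs", "tablets", "games", "wearables", "virtual reality",
              "learning analytics", "open content", "3D printing", "drones",
              "robot tutors", "badges", "BYOD"]
experts = 15
random.seed(2016)

def delphi_round(items):
    scores = {item: sum(random.randint(1, 5) for _ in range(experts)) for item in items}
    ranked = sorted(items, key=lambda item: scores[item], reverse=True)
    return ranked[: max(6, (2 * len(items)) // 3)]    # keep roughly the top two-thirds

shortlist = candidates
while len(shortlist) > 6:
    shortlist = delphi_round(shortlist)

print("This year's six 'emerging technologies':", shortlist)
```

The point isn’t the code, of course; the point is that nothing in the procedure requires a theory of why any of these technologies should matter.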
So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:
It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.
Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.
Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.
And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.
But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.
What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology, and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.
It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner, too, removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.
I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.
“The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”
Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?
“Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.
If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.
This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”
These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.
As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.
I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.
So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.
This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.
But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.
So we can reorganize the bar graph. But it’s still got problems.
The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.
Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half.)
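Here’s a minimal sketch of what that measure actually computes – the adoption figures below are illustrative placeholders, not real survey data:

```python
# Years from commercialization (not invention) until half of US households
# have adopted a technology. The adoption series are hypothetical examples.
def years_to_half(adoption_by_year, threshold=0.5):
    start_year = adoption_by_year[0][0]               # first entry = commercialization
    for year, share in adoption_by_year:
        if share >= threshold:
            return year - start_year
    return None                                       # threshold never crossed

telephone = [(1900, 0.05), (1920, 0.35), (1946, 0.50), (1970, 0.90)]  # hypothetical
internet = [(1991, 0.01), (1995, 0.10), (2000, 0.50), (2012, 0.75)]   # hypothetical

print("telephone:", years_to_half(telephone), "years to half of households")
print("internet: ", years_to_half(internet), "years to half of households")
```

Change the starting date – invention versus commercialization – or the threshold – a quarter versus a half – and the whole “technology is speeding up” story changes with it.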
And that changes the graph again:
How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it take fewer than nine years? I mean, it would have to if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.
Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.
Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?
Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.
But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:
2006 – the phones in their pockets
2007 – the phones in their pockets
2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
2009 – the phones in their pockets
2010 – the phones in their pockets
2011 – the phones in their pockets
2012 – the phones too big for their pockets
2013 – the apps on the phones too big for their pockets
2015 – the phones in their pockets
2016 – the phones in their pockets
This hardly makes the case for technology speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?
I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demands that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.
But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.
“65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.
The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”
Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”
Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.
I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.
I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, but it is underwritten by global capitalism. And it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.
The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.
A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
There are some games where a single player wins, games where a group of players wins, and then there are games where all of the players can share equally in defeat. Yet regardless of the way winners and losers are apportioned, there is something disconcerting about a game where the rules change significantly when one is within sight of victory. Suddenly the strategy that had previously assured success now promises defeat, and the confused players are forced to reconsider all of the seemingly right decisions that have now brought them to an impending loss. It may be a trifle silly to talk of winners and losers in the Anthropocene, with climate change as its bleak herald, but the epoch in which humans have become a geological force is one in which the strategies that propelled certain societies towards victory no longer seem like such wise tactics. With victory seeming less and less certain it is easy to assume defeat is inevitable.
“Let’s not despair” is the retort McKenzie Wark offers on the first page of Molecular Red: Theory for the Anthropocene. The book approaches the Anthropocene as both a challenge and an opportunity, not for seeing who can pen the grimmest apocalyptic dirge but for developing new forms of critical theory. Prevailing responses to the Anthropocene – ranging from faith in new technology, to confidence in the market, to hopes for accountability, to despairing of technology – all strike Wark as insufficient; what he deems necessary are theories (which will hopefully lead to solutions) that recognize the ways in which the aforementioned solutions are entangled with each other. For Wark the coming crumbling of the American system was foreshadowed by the collapse of the Soviet system – and thus Molecular Red looks back at Soviet history to consider what other routes could have been taken there, before he switches his focus back to the United States to search for today’s alternate routes. Molecular Red reads aspects of Soviet history through the lens of “what if?” in order to consider contemporary questions from the perspective of “what now?” As he writes: “[t]here is no other world, but it can’t be this one” (xxi).
Molecular Red is an engaging and interesting read that introduces its readers to a raft of under-read thinkers – and its counsel against despair is worth heeding. And yet, by the book’s end, it is easy to come away with the sense that while it is true that “there is no other world,” that world will, alas, almost certainly be exactly this one.
Before Wark introduces individual writers and theorists he first unveils the main character of his book: “the Carbon Liberation Front” (xiv). In Wark’s estimation the Carbon Liberation Front (CLF from this point forward) represents the truly victorious liberation movement of the past centuries. And what this liberation movement has accomplished is the freeing of – as the name suggests – carbon, an element which has been burnt up by humans in pursuit of energy, with the result being an atmosphere filled with heat-trapping carbon dioxide. “The Anthropocene runs on carbon” (xv), and seeing as the scientists who coined the term “Anthropocene” used it to mark the period wherein glacial ice cores began to show a concentration of greenhouse gases, such as CO2 and CH4, the CLF appears as a force one cannot ignore.
Turning to Soviet history, Wark works to rescue Lenin’s rival Alexander Bogdanov from being relegated to a place as a mere footnote. Yet Wark’s purpose is not simply to emphasize that Lenin and Bogdanov had different ideas regarding what the Bolsheviks should have done; what is of significance in Bogdanov is not questions of tactics but matters of theory. In particular Wark highlights Bogdanov’s ideas of “proletkult” and “tektology” while also drawing upon Bogdanov’s view of nature – he conceived of this “elusive category” as “simply that which labor encounters” (4, italics in original text). Bogdanov’s tektology was to be “a new way of organizing knowledge” while proletkult was to be “a new practice of culture” – as Wark explains, “Bogdanov is not really trying to write philosophy so much as to hack it, to repurpose it for something other than the making of more philosophy” (13). Tektology was an attempt to bring together the lived experience of the proletariat along with philosophy and science – to create an active materialism “based on the social production of human existence” (18), and this production sees Nature as the realm within which laboring takes place. Or, as Wark eloquently puts it, tektology “is a way of organizing knowledge for difficult times…and perhaps also for the strange times likely to come in the twenty-first century” (40). Proletkult (which was an actual movement for some time) sought “to change labor, by merging art and work; to change everyday life…and to change affect” (35) – its goal was not to create proletarian culture but to provide a proletarian “point of view.” Deeply knowledgeable about science, himself a sort of science-fiction author (he wrote a quasi-utopian novel set on Mars called Red Star), and hopeful that technological advances would make workers more like engineers and artists, Bogdanov strikes Wark as “not the present writing about the future, but the past writing to the future” (59). Wark suggests that “perhaps Bogdanov is the point to which to return” (59), hence Wark’s touting of tektology, proletkult, and Bogdanov’s view of nature.
While Wark makes it clear that Bogdanov’s ideas did have some impact in Soviet Russia, their effect was far less than what it could have been – and thus Bogdanov’s ideas remain an interesting case of “what if?” Yet, in the figure of Andrey Platonov, Wark finds an example of an individual whose writings reached towards proletkult. Wark sees Platonov as “the great writer of our planet of slums” (68). The fiction written by Platonov, his “(anti)novellas” as Wark calls them, are largely the tales of committed and well-meaning communists whose efforts come to naught. For Platonov’s characters failure is a constant companion: they struggle against nature in the name of utopianism and find that they simply must keep struggling. In Platonov’s work one finds a continual questioning of communism’s authoritarian turn from below; his “Marxism is an ascetic one, based on the experience of sub-proletarian everyday life” (104). And while Platonov’s tales are short on happy endings, Wark detects hope amidst the powerlessness, as long as life goes on, for “if one can keep living then everything is still possible” (80). Such is the type of anti-cynicism that makes Platonov’s Marxism worth considering – it finds the glimmer of utopia on the horizon even if it never seems to draw closer.
From the cold of the Soviet winter, Wark moves to the birthplace of the Californian Ideology – an ideology which Wark suggests has won the day: “it has no outside, and it is accelerating” (118). Yet, as with the case of Soviet communism, Wark is interested in looking for the fissures within the ideology, and instead of opining on Barbrook and Cameron’s term he moves through Ernst Mach and Paul Feyerabend en route to a consideration of Donna Haraway. Wark emphasizes how Haraway’s Marxism “insists on including nonhuman actors” (136) – her techno-science functions as a way of further breaking down the barrier that had been constructed between humans and nature. Shattering this divider is necessary to consider the ways that life itself has become caught up with capital in the age of patented life forms like OncoMouse. Amidst these entanglements Haraway’s “Cyborg Manifesto” appears to have lost none of its power – Wark sees that “cyborgs are monsters, or rather demonstrations, in the double sense of to show and to warn, of possible worlds” (146). Such a show of possibilities is to present alternatives even when “There’s no mother nature, no father science, no way back (or forward) to integrity” (150). Returning to Bogdanov, Wark writes that “Tektology is all about constructing temporary shelter in the world” (150) – and the cyborg identity is simultaneously what constructs such shelter and seeks haven within it. Beyond Haraway, Wark considers the work of Karen Barad and Paul Edwards, in order to further illustrate that “we are at one and the same time a product of techno-science and yet inclined to think ourselves separate from it” (165). Haraway, and the web of thinkers with which Wark connects her, appear as a way to reconnect with “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences” (179).
After science, Wark transitions to discussing the science fiction of Kim Stanley Robinson – in particular his Mars trilogy. Robinson’s tale of the scientist/technicians colonizing Mars and their attempts to create a better world on the one they are settling is a demonstration of how “the struggle for utopia is both technical and political, and so much else besides” (191). The value of the Mars trilogy, with its tale of revolutions, both successful and unsuccessful, and its portrayal of a transformed Earth, is in the slow unfolding of revolutionary change. In Red Mars (the first book of the trilogy, published in 1992) there is not a glorious revolution that instantly changes everything, but rather “the accumulation of minor, even molecular, elements of a new way of life and their negotiations with each other” (194). At work in the ruminations of the main characters of Red Mars, Wark detects something reminiscent of tektology even as the books themselves seem like a sort of proletkult for the Anthropocene.
Molecular Red’s tour of oft-overlooked, or unduly neglected, thinkers is an argument for a reengagement with Marxism, but a reengagement that willfully and carefully looks for the paths not taken. The argument is not that Lenin needs to be re-read, but that Bogdanov needs to be read. Wark does not downplay the dangers of the Anthropocene, but he refuses to wallow in dismay or pine for a pastoral past that was a fantasy in the first place. For Wark, we are closely entwined with our technology and the idea that it should all be turned off is a nonstarter. Molecular Red is not a trudge through the swamps of negativity; rather it’s a call: “Let’s use the time and information and everyday life still available to us to begin the task, quietly but in good cheer, of thinking otherwise, of working and experimenting” (221).
Wark does not conclude Molecular Red by reminding his readers that they have nothing to lose but their chains. Rather he reminds them that they still have a world to win.
Molecular Red begins with an admonishment not to despair, and ends with a similar plea not to lose hope. Granted, in order to find this hope one needs to be willing to consider that the causes for hopelessness may themselves be rooted in looking for hope in the wrong places. Wark argues that by embracing techno-science, reveling in our cyborg selves, and creating new cultural forms to help us re-imagine our present and future, the left can make itself relevant once more. As a call for the left to embrace technology and look forward, Molecular Red occupies a similar cultural shelf-space to that filled by recent books like Inventing the Future and Austerity Ecology and the Collapse-Porn Addicts. Which is to say that those who think that what is needed is “a frank acknowledgment of the entangling of our cyborg bodies within the technical” (xxi), those who think that the left needs to embrace technology with greater gusto, will find Molecular Red’s argument quite appealing. As for those who disagree – they will likely not find their minds changed by Molecular Red.
As a writer Wark has a talent for discussing dense theoretical terms in a readable and enjoyable format throughout Molecular Red. Regardless of what one ultimately thinks of Wark’s argument, one of the major strengths of Molecular Red is the way it introduces readers to overlooked theorists. After reading Wark’s chapters on Bogdanov and Platonov the reader certainly understands why Wark finds their work so engrossing and inspiring. Similarly, Wark makes a compelling case for the continued importance of Haraway’s cyborg concept and his treatment of Kim Stanley Robinson’s Mars trilogy is an apt demonstration of incorporating science fiction into works of theory. Amidst all of the grim books out there about the Anthropocene, Molecular Red is refreshing in its optimism. This is “Theory for the Anthropocene,” as the book’s subtitle puts it, but it is positive theory.
Granted, some of Wark’s linguistic flourishes become less entertaining over time – “the carbon liberation front” is an amusing concept at first, but by the end of Molecular Red the term is as likely to elicit an eye-roll as introspection. A great deal of carbon has certainly been liberated, but has this been the result of a concerted effort (a “liberation front”) or has this been the result of humans not fully thinking through the consequences of technology? Certainly there are companies that have made fortunes through “liberating” carbon, but who is ultimately responsible for “the carbon liberation front”? One might be willing to treat terms like “liberation front” with less scrutiny were they not being used in a book so invested in re-vitalizing leftist theory. Does not a “liberation front” imply a movement with an ideology? It seems that the liberation of carbon is more of an accident of a capitalist ideology than the driver of that ideology itself. It may seem silly to focus upon the uneasy feeling that accompanies the term “carbon liberation front,” but this is an example of a common problem with Molecular Red – the more one thinks about some of the premises the less satisfying Wark’s arguments become.
Given Wark’s commitment to reconfiguring Marxism for the Anthropocene it is unsurprising that he should choose to devote much of his attention to labor. This is especially fitting given the emphasis that Bogdanov and Platonov place on labor. Wark clearly finds much to approve of in Bogdanov’s idea that “all workers would become more like engineers, and also more like artists” (28). These are largely the type of workers one encounters in Robinson’s work and who are, generally, the heroes of Platonov’s tales; they make up a sort of “proto-hacker class” (90). It is an interesting move from the Soviet laborer to the technician/artist/hacker of Robinson – and it is not surprising that the author of A Hacker Manifesto (2004) should view hackers in such a romantic light. Yet Molecular Red is not a love letter to hackers, which makes it all the more interesting that labor in the Anthropocene is not given broader consideration. Bogdanov might have hoped that automation would make workers more like engineers and artists – but is there not still plenty of laboring going on in the Anthropocene? There is a heck of a lot of labor that goes into making the high-tech devices enjoyed by technicians, hackers and artists – though it may be a type of labor that is more convenient to ignore, as it troubles the idea that workers are all metamorphosing into technician/artist/hackers. Given Platonov’s interest in the workers who seemed abandoned by the utopian promises they had been told, it is a shame that Molecular Red does not pay greater attention to the forgotten workers of the Anthropocene. After all, contemporary miners of minerals for high-tech doodads, device assemblers, e-waste recyclers, and the impoverished citizens of areas already suffering the burdens of climate change have more in common with the forgotten proletarians of Platonov than with the utopian scientists of Robinson’s Red Mars.
One way to read Molecular Red is as a plea to the left not to give up on techno-science. Though it seems worth wondering to what extent the left has actually done anything like this. Some on the left may be less willing to conclude that the Internet is the solution to every problem (“some” does not imply “the majority”), but agitating for green technologies and alternative energies seems a pretty clear demonstration that far from giving up on technology many on the left still approach it with great hope. Wark is arguing for “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences…rather than the Heidegger-inflected critique of Marcuse and others” (179). Yet in looking at contemporary discussions around techno-science and the left, it does not seem that the “Heidegger-inflected critique of Marcuse and others” is particularly dominant. There may be a few theorists here and there still working to advance a rigorous critique of technology – but as the recent issues on technology from The Nation and Jacobin both show – the left is not currently being controlled by a bogey-man of Marcuse. Granted, this is a shame, for Molecular Red could have benefited from engaging with some of the critics of Marxism’s techno-utopian streak. Indeed, is the problem the lack of “open-mindedness toward the sciences” or that being open-minded has failed thus far to do much to stall the Anthropocene? Or is it that, perhaps, the left simply needs to prepare itself for being open-minded about geo-engineering? Wark describes the Anthropocene as being a sort of metabolic rift and cautions that “to reject techno-science altogether is to reject the means of knowing about metabolic rift” (180). Yet this seems to be something of a straw-man argument – how many critics are genuinely arguing that people should “reject techno-science”? Perhaps John Zerzan has a much wider readership than I knew.
Molecular Red cautions its readers against despair, but the text has a significant darkness about it. Wark writes “we are cyborgs, making a cyborg planet with cyborg weather, a crazed, unstable disingression, whose information and energy systems are out of joint” (180) – but the knowledge that “we are cyborgs” does little to help the worker who has lost her job without suddenly becoming an engineer/artist, “a cyborg planet” does nothing to heal the sicknesses of those living near e-waste dumps, and calling it “cyborg weather” does little to help those who are already struggling to cope with the impacts of climate change. We may be cyborgs, but that doesn’t mean the Anthropocene will go easy on us. After all, the scientists in the Mars trilogy may work on transforming that planet into a utopia, but while they are at it things do not exactly go well back on Earth. When Wark writes that “here among the ruins, something living yet remains” (xxii) he is echoing the ideology behind every anarcho-punk record cover that shows a better life being built on the ruins of the present world. But another feature of those album covers, and of the allusion to “among the ruins,” is that the fact that something “living yet remains” is a testament to all of the dying that has also transpired.
McKenzie Wark has written an interesting and challenging book in Molecular Red and it is certainly a book with which it is worth engaging. Regardless of whether or not one is ultimately convinced by Wark’s argument, his final point will certainly resonate with those concerned about the present but hopeful for the future.
After all, we still have a world to win.
_____
Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.
a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
by Nicole Dewandre
~
1. Introduction
This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.
This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box”—a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other hand—serve as apt metaphors for today’s emerging Big Data environment.
Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection from human judgment or control can transmogrify them into blind zombies, opening new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.
The EU and the US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective, that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.
2. A Meta-perspective on The Black Box Society
The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.
a) Game over for Modernity!
As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?”, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into this new era that is coming to us. I believe with Genevieve Bell and Paul Dourish that it is more effective to consider that we are now entering the ubiquitous computing era instead of looking at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet and with the generalized connectivity of objects as well as of people, not only do we witness an increase of the online world, but, more fundamentally, a collapse of the distinction between the online and the offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are or will soon be “sticky”, i.e. they leave a material trace by default, and this for the first time in history. These new affordances and constraints destabilize profoundly our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is not accurate anymore, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.
Policy-making, however, continues to rely heavily on Modern conceptual frameworks, and this not only from the policy-makers’ point of view but more widely from all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks and it certainly goes beyond this review to address them thoroughly. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance less elegantly with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered as rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is the human’s inherent relationality. Issues are framed as if transparency and control are all that men need to make their own way.
1) One-Way Mirror or Social Hypergravity?
Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them appears when he mobilizes the one-way mirror metaphor. He writes:
We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)
I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with absolute knowledge.
Another shortcoming of the one-way mirror metaphor is that its implicit corrective is a transparent pane of glass, so that the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor that, in my view, better characterises the Big Data environment within a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to ensure it. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant and scripta manent“[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a great deal of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, humans and corporations alike.
The heaviness of this new data density stands in between, or is orthogonal to, the two phantasms of the bright emancipatory promises of Big Data, on the one hand, and the frightening fears of Big Brother, on the other. Because of this social hypergravity, we, individually and collectively, do indeed have to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens up increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool”, nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendations to pay increased attention to the ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.
2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?
Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging to ensure proper behavior by corporate agents. Relying on watchdogging to ensure proper behavior nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. It reinforces the idea that the qualification of an act depends on whether or not it is unveiled, as if, so long as it goes unnoticed, it is all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It positions a sort of symbolic face-to-face between supposedly mindless firms, enabled to pursue their careless strategies as long as they are not put under the light, and people who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should be acting in much better ways already. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I do not deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. There is no ready-made consensus about this, so it requires informed exchange of views and hard collective work. As Pasquale explains in an interview where he defends interpretative approaches to social sciences against quantitative ones:
Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.
The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy away from pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain a shared sense of what counts as appropriate behaviour.
As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.
b) The Personification of Corporations
One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is its unveiling of the oddness of the comparative postures and status of corporations, on the one hand, and people, on the other. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), and, in the meantime, the divide between government and business is narrowing (206). Pasquale also points to the fact that, at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats caused by corporate wrongdoings in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddities go along with what I would call a “sensitive inversion”. Corporations, which are functional beings, are granted sensitivity as if they were human beings in policy-making imaginaries and narratives, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones who are really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.
The inversion of sensitivity might not be a problem per se, if it opened pragmatically onto an effective way to design and implement policies which do indeed bear positive effects for men and women in the end. But Pasquale provides ample evidence showing that this is not the case, at least in the three sectors he has looked at most closely, and certainly not in finance.
Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks of considering corporations as agents in the public space, and helps in understanding the consequences of granting them sensitivity or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e. as a what, we do not treat them as humans qua humans, but as objects. Last, and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers such a critical role on speech and action, and that articulates action with identity and freedom. Indeed, for entities for which the who is partly out of reach and yet matters, appearing in front of others, notably through speech and action, is a necessary condition of revealing that identity:
Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]
So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.
For Arendt, the essence of politics is freedom, and it is grounded in action, not in labour and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. It is thus much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural characteristic, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is driven not by one’s own interest but by the desire to act together with others—”in sheer togetherness”—and to be acknowledged as who one is. If an agent perceives him-, her- or itself and behaves only as a profit-maximiser or as an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. It simply does not deserve, and therefore should not be granted, the status of a persona in the public space.
It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would simply be impossible among agents below their satiety level or deprived of having one. This is, however, the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only for profit-maximisation. For a business, making a profit is a condition of staying alive, just as, for humans, eating is a condition of staying alive. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what is allowed by the legal framework on company law[10]. So, the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by refraining from behaving solely according to their narrowly defined interests. It also means caring for the world as much as, if not more than, for themselves.
This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt with Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness, or that their humanness is silenced by others, when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all wakeful activities take place in private, driven exclusively by needs and interests.
In other words, some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, to be too idealistic. But in my view, it falls short of being idealistic enough: there is a missing normative core to the proposals in the book, which can be corrected by democratic, political, and particularly Arendtian theory. If a populace does not accept that some level of goods and services satiates its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens with which to formulate what it takes to enable the fairness and fair play described in The Black Box Society.
3. Stepping into Hyperconnectivity
1) Agents as Relational Selves
A central feature of the Modern conceptual framework underlying policy-making is the figure of the rational subject as the political proxy of humanness. I claim that this figure is no longer effective in ensuring a fair and flourishing life for men and women in this emerging hyperconnected era, and that we should adopt instead the figure of a “relational self” as it emerges from the Arendtian concept of plurality.
The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn apart between “truncated gods”—when Reason is considered as what eventually brings an overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other hand. Men and women are neither “Deep Blue” nor mere automatons. In between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and this in both of our stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). Indeed, this type of overtly monitored working environment is nothing else than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprit is not algorithms themselves, but the careless and automaton-like human implementers and managers who act according to a conceptual framework in which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly inhuman. So, staying stuck with the rational subject as a proxy for humanness either ends up smashing our humanness as workers and consumers or, at best, leads to absurd situations where being free would mean spending all our time checking that we are not being controlled.
As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and for making appropriate decisions and judgments, ones that impact positively and meaningfully on human lives.
With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: that of the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character, which requires appearing among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition that we grant to each other, and that our humanity is grounded precisely in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.
The relational self, as arising from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, put forward in order to overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notion of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of the conceptions of truth and autonomy may be understood as doing.[12]
Adopting the relational self, instead of the rational subject, as the canonical figure of humanness brings to light the direct relationship between the quality of interactions, on the one hand, and the quality of life, on the other. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are aware that what they need instead is respect and fair treatment from others. This figure also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves, and the robustness of the world they construct together and that lies between them, depend critically on being treated fairly and not being fooled.
It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. Just as “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulatory ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So, indeed, Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch as having said: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of the utmost importance to shift away from picturing ourselves as rational subjects and to embrace instead the figure of relational selves, if only to preserve trust as a general baseline in human affairs. Indeed, if it turned out that trust could only be the outcome of generalized suspicion, then we would be lost.
Besides grounding the notion of the relational self, the Arendtian concept of plurality makes it possible to account for interactions among humans and other plural agents that go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), interactions which lead to the revelation of their identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations of interactions available to policy-making. It brings, as it were, a post-Modern, or dare I say hyperconnected, view of interactions. The Modern conceptual basket of representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, whether with the invisible-hand metaphor or with Montesquieu’s division of powers. The Modern approach to interactions is either anchored in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant), or else anchored in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.
2) From Watchdogging to Accountability and Intelligibility
If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and of accountability from others. Avoiding deception and humiliation[14] is certainly a basic condition for enabling decency in the public space.
Once equipped with this concept of the relational self as the canonical figure of political agents, be they men, women, corporations or even States, in a hyperconnected era, one can see clearly why the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. Indeed, if watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver on accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. As for intelligibility, I think that it is indeed much more meaningful and relevant than transparency.
Pasquale invites us to think carefully about regimes of disclosure, along three dimensions: depth, scope and timing. He calls for fair data practices, reinforced by forms of supervision of the kind that have been established for checking on research practices involving human subjects. He suggests that each person is entitled to an explanation of the rationale for decisions concerning them, and should have the ability to challenge those decisions. He recommends immutable audit logs for holding spying activities to account. He also calls for regulatory measures compensating for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants from accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to chasing wrongdoing by corporate actors. And he argues that the approach taken in the area of Health Fraud Enforcement could bear fruit in finance, search and reputation.
What I appreciate in Pasquale’s call for intelligibility is that it is indeed calibrated to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and to move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an endgame to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that much of what takes place today goes beyond that and constitutes genuinely new and uncharted territories and horizons for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.
4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]
To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, performing exhausting labor without reward, falling through the holes of the meritocracy net, being constrained to a specular posture towards others’ deeds: all these have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, women have experienced at the hands of men. So, welcome to the world of women….
But this situation may be looked at more optimistically, as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, women have, concretely, been admitted to the status of “rational subject”, but that does not dissolve its masculine origin, nor the oddness or uneasiness for women in embracing this figure. Indeed, it was forged by men with men in mind; women, for those men, were indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women than the rational subject ever was. In fact, it enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?
In conclusion, the advent of Big Data can be looked at in two ways. The first is to see it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to see it as a wake-up call for a new beginning; indeed, by making obvious the absurdity, and the price, of following the Modern conceptual frameworks all the way down to their consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.
Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
_____
Nicole Dewandre holds academic degrees in engineering, economics and philosophy. She has been a civil servant in the European Commission since 1983. She was an advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in EU research policy, promoting gender equality, partnership with civil society and sustainability issues. Since 2011, she has worked on the societal issues related to the deployment of ICTs. She has published widely on organizational and political issues relating to ICTs.
The views expressed in this article are the sole responsibility of the author and in no way represent the views of the European Commission or its services.
Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, which hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way of making my points when I was diverging from his views. I would also like to thank Jérôme Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragement in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text written by a non-native English speaker.
[1] This change of perspective has, in itself, the interesting side effect of pulling the rug from under the feet of those “addicted to speed”; Pasquale is right when he points to this addiction (195) as one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.
[2] Williams, Truth, Autonomy, and Speech, New York: New York University Press, 2004 (35).
[3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).
[5] Literally: “spoken words fly; written ones remain”
[6] Apart from action, Arendt distinguishes two other fundamental human activities that, together with action, account for the vita activa. These two other activities are labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the type of activity men and women engage in to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic or an instrumental rationale.
[7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).
[11] This expression was introduced into the Onlife Initiative by Charles Ess, but from a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).
By Audrey Watters
~
After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…
In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.
Despite the massive amounts of money spent by the industry to prop itself up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans they can’t afford to pay back.
But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?
School as “Skills Training”
In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.
I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”
But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.
There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.
Nor is the promotion of a more business-focused education that new either.
Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.
The curriculum of these commercial colleges was largely based around the demands of local employers and an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If those universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5800 in colleges and universities – something that highlights what remains a familiar refrain today: that traditional higher ed institutions do not meet everyone’s needs.
The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”
That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.
It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.
Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).
It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.
Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.
That’s part of the apprehension about for-profit universities’ (almost) most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:
The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.
Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.
Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.
According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.
For-Profit Higher Ed: Who’s Being Served?
The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
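To make the arithmetic of those two thresholds concrete, here is a minimal sketch in Python with hypothetical figures. It is not the regulation itself – the actual rule involves cohort medians, amortization assumptions, and an intermediate “zone” between passing and failing – and the poverty-line constant below is illustrative only.

```python
# A minimal sketch of the debt-to-earnings test described above, with hypothetical
# numbers. The real gainful-employment rule is more involved (cohort medians,
# amortization assumptions, a "zone" between passing and failing), so treat this
# as an illustration of the two thresholds only.

POVERTY_LINE = 11_770  # illustrative single-person poverty guideline, in dollars

def passes_debt_to_earnings(annual_loan_payment: float, annual_earnings: float) -> bool:
    """Pass if payments fall within 8% of annual earnings or 20% of discretionary earnings."""
    discretionary = max(annual_earnings - 1.5 * POVERTY_LINE, 0)
    within_annual = annual_loan_payment <= 0.08 * annual_earnings
    within_discretionary = discretionary > 0 and annual_loan_payment <= 0.20 * discretionary
    return within_annual or within_discretionary

# Example: a graduate earning $32,000 with $3,000/year in loan payments exceeds both thresholds.
print(passes_debt_to_earnings(3_000, 32_000))  # False
```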
The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceiving as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates repay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
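As a back-of-the-envelope illustration of that last point, the sketch below compares the average upfront tuition cited above with a deferred “share of first-year salary” arrangement. The salary figures are hypothetical, and the $5,000 deposit is left out since its terms vary.

```python
# Rough comparison, with hypothetical salaries, of average upfront tuition and a
# "free upfront, repay 20% of first-year salary" arrangement like the one described above.

AVERAGE_UPFRONT_TUITION = 11_852  # average bootcamp tuition reported in the survey cited above
INCOME_SHARE = 0.20

for salary in (45_000, 60_000, 75_000):
    deferred_cost = INCOME_SHARE * salary
    print(f"first-year salary ${salary:,}: income-share cost ${deferred_cost:,.0f} "
          f"vs. average upfront tuition ${AVERAGE_UPFRONT_TUITION:,}")
```

At a $60,000 first-year salary, the deferred arrangement already costs more than the average upfront tuition, which is the sense in which low sticker prices can mask high effective indebtedness.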
According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.
That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)
The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.
Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):
It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?
Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, 1 in 10 African American students, 1 in 14 Latino students, and 1 in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students at for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)
Deming, Goldin, and Katz argue that
The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.
According to one 2010 report, just 22% of first- and full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.
For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about these programs’ student populations and student outcomes remains pretty speculative.
What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.
EQUIP and the New For-Profit Higher Ed
On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”
The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.
By making financial aid available for bootcamps and MOOCs, one does have to wonder if the Obama Administration is not simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.
Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)
Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.
Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.
And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.
Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.
Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked with doing more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than four-year institutions do. Deep budget cuts have also meant that, even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they needed.
This is what we know from history: as funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.