boundary 2

  • R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    A review of Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016)

    by R. Joshua Scannell

    Benjamin Bratton’s The Stack: On Software and Sovereignty is an often brilliant and regularly exasperating book. It is a diagnosis of the epochal changes in the relations between software, sovereignty, climate, and capital that underwrite the contemporary condition of digital capitalism and geopolitics. Anybody who is interested in thinking through the imbrication of digital technology with governance ought to read The Stack. Many of its arguments are useful or interesting. But reading it is an endeavor. Sprawling out across 502 densely packed pages, The Stack is nominally a “design brief” for the future. I don’t know that I understand that characterization, no matter how many times I read this tome.

    The Stack is chockablock with schematic abstractions. They make sense intuitively or cumulatively without ever clearly coming into focus. This seems to be a deliberate strategy. Early in the book, Bratton describes The Stack–the titular “accidental megastructure” of “planetary computation” that has effectively broken and redesigned, well, everything–as “a blur.” He claims that

    Only a blur provides an accurate picture of what is going on now and to come…Our description of a system in advance of its appearance maps what we can see but cannot articulate, on the one hand, versus what we know to articulate but cannot yet see, on the other. (14)

    This is also an accurate description of the prevailing sensation one feels working through the text. As Ian Bogost wrote in his review of The Stack for Critical Inquiry, reading the book feels “intense—meandering and severe but also stimulating and surprising. After a while, it was also a bit overwhelming. I’ll take the blame for that—I am not necessarily built for Bratton’s level and volume of scholarly intensity.” I agree on all fronts.

    Bratton’s inarguable premise is that the various computational technologies that collectively define the early decades of the 21st century—smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation—are not analytically separable. They are often literally interconnected but, more to the point, they combine to produce a governing architecture that has subsumed older calculative technologies like the nation state, the liberal subject, the human, and the natural. Bratton calls this “accidental megastructure” The Stack.

    Bratton argues that The Stack is composed of six “layers”: the earth, the cloud, the city, the address, the interface, and the user. They all indicate more or less what one might expect, but with a counterintuitive (and often Speculative Realist) twist. The earth is the earth but is also a calculation machine. The cloud is “the cloud” but as a chthonic structure of distributed networks and nodal points that reorganize sovereign power and body forth quasi-feudal corporate sovereignties. The City is, well, cities, but not necessarily territorially bounded, formally recognized, or composed of human users. Users are also usually not human. They’re just as often robots or AI scripts. Really they can be anything that works up and down the layers, interacting with platforms (which can be governments) and routed through addresses (which are “every ‘thing’ that can be computed” including “individual units of life, loaded shipping containers, mobile devices, locations of datum in databases, input and output events and enveloped entities of all size and character” [192], etc.).

    Each layer is richly thought through and described, though it’s often unclear whether the “layer” in question is “real” or a useful conceptual envelope or both or neither. That distinction is generally untenable, and Bratton would almost certainly reject the dichotomy between the “real” and the “metaphorical.” But it isn’t irrelevant for this project. He argues early on that, contra Marxist thought that understands the state metaphorically as a machine, The Stack is a “machine-as-the-state.” That’s both metaphorical and not. There really are machines that exert sovereign power, and there are plenty of humans in state apparatuses that work for machines. But there aren’t, really, machines that are states. Right?

    Moments like these, when The Stack’s concepts productively destabilize given categories (like the state) that have never been coherent enough to justify their power, are when the book is at its most compelling. And many of the counterintuitive moves that Bratton makes start and end with real, important insights. For instance, the insistence on the absolute materiality and absolute earthiness of The Stack and all of its operations leads Bratton to a thoroughgoing and categorical rejection of the prevailing “idiot language” that frames digital technology as though it exists in a literal “cloud,” or some sort of ethereal “virtual” that is not coincident with the “real” world. Instead, in The Stack, every point of contact between every layer is a material event that transduces and transforms everything else. To this end, he inverts Latour’s famous dictum that there is no global, only local. Instead, The Stack as planetary megastructure means that there is only global. The local is a dead letter. This is an anthropocene geography in which an electron, somewhere, is always firing because a fossil fuel is burning somewhere else. But it is also a post-anthropocene geography because humans are not The Stack’s primary users. The planet itself is a calculation machine, and it is agnostic about human life. So, there is a hybrid sovereignty: The Stack is a “nomos of the earth” in which humans are an afterthought.

    A Design for What?

    Bratton is at his conceptual best when he is at his weirdest. Cyclonopedic (Negarestani 2008) passages in which the planet slowly morphs into something like H. P. Lovecraft and H. R. Giger’s imaginations fucking in a Peter Thiel fever dream are much more interesting (read: horrifying) than the often perfunctory “real life” examples from “real world” geopolitical trauma, like “The First Sino-Google War of 2009.” But this leads to one of the most obvious shortcomings of the text. It is supposedly a “design brief,” but it’s not clear what or who it is a design brief for.

    For Bratton, design

    means the structuring of the world in reaction to an accelerated decay and in projective anticipation of a condition that is now only the ghostliest of a virtual present tense. This is a design for accommodating (or refusing to accommodate) the post-whatever-is-melting-into-air and prototyping for pre-what-comes-next: a strategic, groping navigation (however helpless) of the punctuations that bridge between these two. (354)

    Design, then, and not theory, because Bratton’s Stack is a speculative document. Given the bewildering and potentially apocalyptic conditions of the present, he wants to extrapolate outwards. What are the heterotopias-to-come? What are the constraints? What are the possibilities? Sounding a familiar frustration with the strictures of academic labor, he argues that this moment requires something more than diagnosis and critique. Rather,

    the process by which sovereignty is made more plural becomes a matter of producing more than discoursing: more about pushing, pulling, clicking, eating, modeling, stacking, prototyping, subtracting, regulating, restoring, optimizing, leaving alone, splicing, gardening and evacuating than about reading, examining, insisting, rethinking, reminding, knowing full-well, enacting, finding problematic, and urging. (303)

    No doubt. And, not that I don’t share the frustration, but I wonder what a highly technical, 500-page diagnosis of the contemporary state of software and sovereignty published and distributed by an academic press and written for an academic audience is if not discoursing? It seems unlikely that it can serve as a blueprint for any actually-existing power brokers, even though its insights are tremendous. At the risk of sounding cynical, calling The Stack a “design brief” seems like a preemptive move to liberate Bratton from having to seriously engage with the different critical traditions that work to make sense of the world as it is in order to demand something better. This allows for a certain amount of intellectual play that can sometimes feel exhilarating but can just as often read as a dodge—as a way of escaping the ethical and political stakes that inhere in critique.

    That is an important elision for a text that is explicitly trying to imagine the geopolitics of the future. Bratton seems to pose The Stack from a nebulous “Left” position that is equally disdainful of the sort of “Folk Politics” that Srnicek and Williams (2015) so loathe and the accelerationist tinge of the Speculative Realists with whom he seems spiritually aligned. This sense of rootlessness sometimes works in Bratton’s favor. There are long stretches in which his cherry-picking and remixing of ideas from across a bewildering array of schools of thought yields real insights. But just as often, the “design brief” characterization seems to be a way out of thinking the implications of the conjuncture through to their conclusion. There is a breeziness about how Bratton poses futures-as-thought-experiments that is troubling.

    For instance, in thinking through the potential impacts of the capacity to measure planetary processes in real time, Bratton suggests that producing a sensible world is not only a process of generalizing measurement and representation. He argues that

    the sensibility of the world might be distributed or organized, made infrastructural, and activated to become part of how the landscape understands itself and narrates itself. It is not only a diagnostic image then; it is a tool for geo-politics in formation, emerging from the parametric multiplication and algorithmic conjugation of our surplus projections of worlds to come, perhaps in mimetic accordance with one explicit utopian conception or another, and perhaps not. Nevertheless, the decision between what is and is not governable may arise as much from what the model computational image cannot do as much as what it can. (301, emphasis added)

    Reading this, I wanted to know: What explicit utopian project is he thinking about? What are the implications of it going one way and not another? Why mimetic? What does the last bit about what is and is not governable mean? Or, more to the point: who and what is going to get killed if it goes one way and not another? There are a great many instances like this over the course of the book. At the precise moment where analysis might inform an understanding of where The Stack is taking us, Bratton bows out. He’s set down the stakes, and given a couple of ideas about what might happen. I guess that’s what a design brief is meant to do.

    Another example, this time concerning the necessity of geoengineering for solving what appears to be an ever-more-imminent climatic auto-apocalypse:

    The good news is that we know for certain that short-term “geoengineering” is not only possible but in a way inevitable, but how so? How and by whom does it go, and unfortunately for us the answer (perhaps) must arrive before we can properly articulate the question. For the darker scenarios, macroeconomics completes its metamorphosis into ecophagy, as the discovery of market failures becomes simultaneously the discovery of limits of planetary sinks (e.g., carbon, heat, waste, entropy, populist politics) and vice versa; The Stack becomes our dakhma. The shared condition, if there is one, is the mutual unspeakability and unrecognizability that occupies the seat once reserved for Kantian cosmopolitanism, now just a pre-event reception for a collective death that we will actually be able to witness and experience. (354, emphasis added)

    Setting aside the point that it is not at all clear to me that geoengineering is an inevitable or even appropriate (Crist 2017) way out of the anthropocene (or capitalocene? (Moore 2016)) crisis, if the answer for “how and by whom does it go” is to arrive before the question can be properly articulated, then the stack-to-come starts looking a lot like a sort of planetary dictatorship of, well of who? Google? Mark Zuckerberg? In-Q-Tel? Y Combinator? And what exactly is the “populist politics” that sits in the Latourian litany alongside carbon, heat, waste, and entropy as a full “planetary sink”? Does that mean Trump, and all the other globally ascendant right wing “populists?” Or does it mean “populist politics” in the Jonathan Chait sense that can’t differentiate between left and right and therefore sees both political projects as equally dismissible? Does populism include any politics that centers the needs and demands of the public? What are the commitments in this dichotomy? I suppose The Stack wouldn’t particularly care about these sorts of questions. But a human writing a 500-page playbook so that other humans might better understand the world-to-come might be expected to. After all, a choice between geoengineering or collective death might be what the human population of the planet is facing (and for most of the planet’s species, and for a great many of the planet’s human societies, already eliminated or dragged down the road towards it during the current mass extinction, there is no choice), but such a binary doesn’t make for much of a design spec.

    One final example, this time on what the political subject of the stack-to-come ought to look like:

    We…require, as I have laid out, a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, parliamentary liberalism, poststructuralist linguistic reduction, or the will to secede into the moral safety of individual privacy and withdrawn from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.

    If “poststructuralist linguistic reduction” is on the same plane as “parliamentary liberalism” or “homo economicus” as one among several prevailing ideas of the contemporary “political subject,” then I am fairly certain that we are in the realm of academic “theory” rather than geopolitical “design.” The more immediate point is that I do understand what the terms we ought to abandon mean, and I agree that they need to go. But I don’t understand what the redefined political subject looks like. Again, if this is “theory,” then that sort of hand waving is unfortunately often to be expected. But if it’s a design brief—even a speculative one—for the transforming nature of sovereignty and governance, then I would hope for some more clarity on what political subjectivity looks like in The Stack-To-Come.

    Or, and this is really the point, I want The Stack to tell me something more about how The Stack participates in the production and extractable circulation of populations marked for death and debility (Puar 2017). And I want to know what, exactly, is so conceptually radical about pointing out that human beings are not at the center of the planetary systems that are driving transformations in geopolitics and sovereignty. After all, hasn’t that been exactly the precondition for the emergence of The Stack? This accidental megastructure born out of the ruthless expansions of digitally driven capitalism is not just working to transform the relationship between “human” and sovereignty. The condition of its emergence is precisely that most planetary homo sapiens are not human, and are therefore disposable and disposited towards premature death. The Stack might be “our” dakhma, if we’re speaking generically as a sort of planetary humanism that cannot but be read as white—or, more accurately, “capacitated.” But the systematic construction of human stratification along lines of race, gender, sex, and ability as precondition for capitalist emergence freights the stack with a more ancient, and ignored, calculus: that of the logistical work that shuttles humans between bodies, cargo, and capital. It is, in other words, the product of an older planetary death machine: what Fred Moten and Stefano Harney (2013) call the “logistics in the hold” that makes The Stack hum along.

    The tenor of much of The Stack is redolent of managerial triumphalism. The possibility of apocalypse is always minimized. Bratton offers, a number of times, that he’s optimistic about the future. He is disdainful of the most stringent left critics of Silicon Valley, and he thinks that we’ll probably be able to trust our engineers and institutions to work out The Stack’s world-destroying kinks. He sounds invested, in other words, in a rhetorical-political mode of thought that, for now, seems to have died on November 9, 2016. So it is not surprising that Bratton opens the book with an anecdote about Hillary Clinton’s vision of the future of world governance.

    The Stack begins with a reference to then-Secretary of State Clinton’s 2013 farewell address to the Council on Foreign Relations. In that speech, Clinton argued that the future of international governance requires a “new architecture for this new world, more Frank Gehry than formal Greek.” Unlike the Athenian Agora, which could be held up by “a few strong columns,” contemporary transnational politics is too complicated to rely on stolid architecture, and instead must make use of the type of modular assemblage that “at first might appear haphazard, but in fact, [is] highly intentional and sophisticated” that makes Gehry famous. Bratton interprets her argument as a “half-formed question, what is the architecture of the emergent geopolitics of this software society? What alignments, components, foundations, and apertures?” (Bratton 2016, 13).

    For Clinton, future governance must make a choice between Gehry and Agora. The Gehry future is that of the seemingly “haphazard” but “highly intentional and sophisticated” interlocking treaties, non-governmental organizations, and super- and supra-state technocratic actors working together to coordinate the disparate interests of states and corporations in the service of the smooth circulation of capital across a planetary logistics network. On the other side, a world order held up by “a few strong columns”—by implication the status quo after the collapse of the Soviet Union, a transnational sovereign apparatus anchored by the United States. The glaring absence in this dichotomy is democracy—or rather its assumed subsumption into American nationalism. Clinton’s Gehry future is a system of government whose machinations are by design opaque to those that would be governed, but whose beneficence is guaranteed by the good will of the powerful. The Agora—the fountainhead of slaveholder democracy—is metaphorically reduced to its pillars, particularly the United States and NATO. Not unlike ancient Athens, it’s democracy as empire.

    There is something darkly prophetic of the collapse of the Clintonian world vision, and perversely apposite, in Clinton’s rhetorical move to supplant the Agora with Gehry as the proper metaphor for future government. It is unclear why a megalomaniacal corporate starchitecture firm that robs public treasuries blind and facilitates tremendous labor exploitation ought to be the future for which the planet strives.

    For better or for worse, The Stack is a book about Clinton. As a “design brief,” it works from a set of ideas about how to understand and govern the relationship between software and sovereignty that were strongly intertwined with the Clinton-Obama political project. That means, abysmally, that it is now also about Trump. And Trump hangs synecdochically over theoretical provocations for what is to be done now that tech has killed the nation-state’s “Westphalian Loop.” This was a knotty question when the book went to press in February 2016 and Gehry seemed ascendant. Now that the Extreme Center’s (Ali 2015) project of tying neoliberal capitalism to non-democratic structures of technocratic governance appears to be collapsing across the planet, Clinton’s “half-formed question” is even knottier. If we’re living through the demise of the Westphalian nation state, then it’s sounding one hell of a murderous death rattle.

    Gehry or Agora?

    In the brief period between July 21 and November 8, 2016, when the United States’ cognoscenti convinced itself that another Clinton regime was inevitable, there was a neatly ordered expectation of how “pragmatic” future governance under a prolonged Democratic regime would work. In the main, the public could look forward to another eight years sunken in a “Gehry-like” neoliberal surround subtended by the technocratic managerialism of the Democratic Party’s right edge. And, while for most of the country and planet, that arrangement didn’t portend much to look forward to, it was at least not explicitly nihilistic in its outlook. The focus on management, and on the deliberate dismantling of the nation state as the primary site of governance in favor of the mesh of transnational agencies and organizations that composed 21st century neoliberalism’s star actants, meant that a number of questions about how the world would be arranged were left unsettled.

    By end of election week, that future had fractured. The unprecedented amateurishness, decrypted racism, and incomparable misogyny of the Trump campaign portended an administration that most thought couldn’t, or at the very least shouldn’t, be trusted with the enormous power of the American executive. This stood in contrast to Obama, and (perhaps to a lesser extent) to Clinton, who were assumed to be reasonable stewards. This paradoxically helps demonstrate just how much the “rule of law” and governance by administrative norms that theoretically underlie the liberal national state had already deteriorated under Obama and his immediate predecessors—a deterioration that was in many ways made feasible by the innovations of the digital technology sector. As many have pointed out, the command-and-control prerogatives that Obama claimed for the expansion of executive power depended essentially on the public perception of his personal character.

    The American people, for instance, could trust planetary drone warfare because Obama claimed to personally vet our secret kill list, and promised to be deliberate and reasonable about its targets. Of course, Obama is merely the most publicly visible part of a kill-chain that puts this discretionary power over life and death in the hands of the executive. The kill-chain is dependent on the power of, and sovereign faith in, digital surveillance and analytics technologies. Obama’s kill-chain, in short, runs on the capacities of an American warfare state—distributed at nodal points across the crust of the earth, and up its Van Allen belts—to read planetary chemical, territorial, and biopolitical fluxes and fluctuations as translatable data that can be packet switched into a binary apparatus of life and death. This is the calculus that Obama conjures when he defines those mobile data points that concatenate into human beings as “baseball cards” that constitute a “continuing, imminent threat to the American people.” It is the work of planetary sovereignty that rationalizes and capacitates the murderous “fix” and “finish” of the drone program.

    In other words, Obama’s personal aura and eminent reasonableness legitimated an essentially unaccountable and non-localizable network of black sites and black ops (Paglen 2009, 2010) that loops backwards and forwards across the drone program’s horizontal regimes of national sovereignty and vertical regimes of cosmic sovereignty. It is, to use Clinton’s framework, a very Frank Gehry power structure. Donald Trump’s election didn’t transform these power dynamics. Instead, his personal qualities made the work of planetary computation in the service of sovereign power to kill suddenly seem dangerous or, perhaps better: unreasonable. Whether President Donald Trump would be as scrupulous as his predecessor in determining the list of humans fit for eradication was (formally speaking) a mystery, but practically a foregone conclusion. But in both presidents’ cases, the dichotomies between global and local, subject and sovereign, human and non-human that are meant to underwrite the nation state’s rights and responsibilities to act are fundamentally blurred.

    Likewise, Obama’s federal imprimatur recast the transparently disturbing decision to pursue mass distribution of privately manufactured surveillance technology – Taser’s police-worn body cameras, for instance – as a reasonable policy response to America’s dependence on heavily armed paramilitary forces to maintain white supremacy and crush the poor. Under Obama and Eric Holder, American liberals broadly trusted that digital criminal justice technologies were crucial for building a better, more responsive, and more responsible justice system. With Jeff Sessions in charge of the Department of Justice, the idea that the technologies that Obama’s Presidential Task Force on 21st Century Policing lauded as crucial for achieving the “transparency” needed to “build community trust” between historically oppressed groups and the police remained plausible instruments of progressive reform suddenly seemed absurd. Predictive policing, ubiquitous smart camera surveillance, and quantitative risk assessments sounded less like a guarantee of civil rights and more like a guarantee of civil rights violations under a president that lauds extrajudicial police power. Trump goes out of his way to confirm these civil libertarian fears, such as when he told Long Island law enforcement that “laws are stacked against you. We’re changing those laws. In the meantime, we need judges for the simplest thing — things that you should be able to do without a judge.”

    But, perhaps more to the point, the rollout of these technologies, like the rollout of the drone program, formalized a transformation in the mechanics of sovereign power that had long been underway. Stripped of the sales pitch and abstracted from the constitutional formalism that ordinarily sets the parameters for discussions of “public safety” technologies, what digital policing technologies do is flatten out the lived and living environment into a computational field. Police-worn body cameras quickly traverse the institutional terrain from a tool meant to secure civil rights against abusive officers to an artificially intelligent weapon that flags facial structures that match with outstanding warrants, that calculates changes in enframed bodily comportment to determine imminent threat to the officer-user, and that captures the observed social field as data privately owned by the public safety industry’s weapons manufacturers. Sovereignty, in this case, travels up and down a Stack of interoperative calculative procedures, with state sanction and human action just another data point in the proper administration of quasi-state violence. After all, it is Axon (formerly Taser), and not a government, that controls the servers that their body cams draw on to make real-time assessments of human danger. The state sanctions a human officer’s violence, but the decision-making apparatus that situates the violence is private, and inhuman. Inevitably, the drone war and carceral capitalism collapse into one another, as drones are outfitted with AI designed to identify crowd “violence” from the sky, a vertical parallax to pair with the officer-user’s body worn camera.

    Trump’s election seemed to show with a clarity that had hitherto been unavailable for many that wedding the American security apparatus’ planetary sovereignty to twenty years of unchecked libertarian technological triumphalism (even, or especially, in the service of liberal principles like disruption, innovation, efficiency, transparency, convenience, and generally “making the world a better place”) might, in fact, be dangerous. When the Clinton-Obama project collapsed, its assumption that the intertwining of private and state sector digital technologies inherently improves American democracy and economy, and increases individual safety and security, looked absurd. The shock of Trump’s election, quickly and self-servingly blamed on Russian agents and Facebook, transformed Silicon Valley’s broadly shared Prometheanism into interrogations of the industry’s corrosive infrastructural toxicity, and its deleterious effect on the liberal national state. If tech would ever come to Jesus, the end of 2016 would have had to be the moment. It did not.

    A few days after Trump won election I found myself a fly on the wall in a meeting with mid-level executives for one of the world’s largest technology companies (“The Company”). We were ostensibly brainstorming how to make The Cloud a force for “global good,” but Trump’s ascendancy and all its authoritarian implications made the supposed benefits of cloud computing—efficiency, accessibility, brain-shattering storage capacity—suddenly terrifying. Instead of setting about the dubious task of imagining how a transnational corporation’s efforts to leverage the gatekeeping power over access to the data of millions, and the private control over real-time identification technology (among other things) into heavily monetized semi-feudal quasi-sovereign power could be Globally Good, we talked about Trump.

    The Company’s reps worried that, Peter Thiel excepted, tech didn’t have anybody near enough to Trump’s miasmatic fog to sniff out the administration’s intentions. It was Clinton, after all, who saw the future in global information systems. Trump, as we were all so fond of pointing out, didn’t even use a computer. Unlike Clinton, the extent of Trump’s mania for surveillance and despotism was mysterious, if predictable. Nobody knew just how many people of color the administration had in its crosshairs, and The Company reps suggested that the tech world wasn’t sure how complicit it wanted to be in Trump’s explicitly totalitarian project. The execs extemporized on how fundamental the principles of democratic and republican government were to The Company, how committed they were to privacy, and how dangerous the present conjuncture was. As the meeting ground on, reason slowly asphyxiated on a self-evidently implausible bait hook: that it was now both the responsibility and appointed role of American capital, and particularly of the robber barons of Platform Capitalism (Srnicek 2016), to protect Americans from the fascistic grappling of American government. Silicon Valley was going to lead the #resistance against the very state surveillance and overreach that it capacitated, and The Company would lead Silicon Valley. That was the note on which the meeting adjourned.

    That’s not how things have played out. A month after that meeting, on December 14, 2016, almost all of Silicon Valley’s largest players sat down at Trump’s technology roundtable. Explaining themselves to an aghast (if credulous) public, tech’s titans argued that it was their goal to steer the new chief executive of American empire towards a maximally tractable gallimaufry of power. This argument, plus over one hundred companies’ decision to sign an amici curiae brief opposing Trump’s first attempt at a travel ban aimed at Muslims, seemed to publicly signal that Silicon Valley was prepared to #resist the most high-profile degradations of contemporary Republican government. But, in April 2017, Gizmodo inevitably reported that those same companies that appointed themselves the front line of defense against depraved executive overreach in fact quietly supported the new Republican president before he took office. The blog found that almost every major concern in the Valley donated tremendously to the Trump administration’s Presidential Inaugural Committee, which was impaneled to plan his sparsely attended inaugural parties. The Company alone donated half a million dollars. Only two tech firms donated more. It seemed an odd way to #resist.

What struck me during the meeting was how weird it was that executives honestly believed a major transnational corporation would lead the political resistance against a president committed to the unfettered ability of American capital to do whatever it wants. What struck me afterward was how easily the boundaries between software and sovereignty blurred. The Company’s executives assumed, ad hoc, that their operation had the power to halt or severely hamper the illiberal policy priorities of government. By contrast, it’s hard to imagine mid-level General Motors executives imagining that they have the capacity or responsibility to safeguard the rights and privileges of the republic. Except in an indirect way, selling cars doesn’t have much to do with the health of state and civil society. But state and civil society are precisely what Silicon Valley has privatized, monetized, and re-sold to the public. Yet even “state and civil society” is not quite enough. What Silicon Valley endeavors to produce is, pace Bratton, a planetary simulation as prime mover. The goal of digital technology conglomerates is not only to streamline the formal and administrative roles and responsibilities of the state, or to recreate the mythical meeting houses of the public sphere online. Platform capital has as its target the informational infrastructure that makes living on earth seem to make sense, to be sensible. And in that context, it’s commonsensical to imagine software as sovereignty.

    And this is the bind that will return us to The Stack. After one and a half relentless years of the Trump presidency, and a ceaseless torrent of public scandals concerning tech companies’ abuse of power, the technocratic managerial optimism that underwrote Clinton’s speech has come to a grinding halt. For the time being, at least, the “seemingly haphazard yet highly intentional and sophisticated” governance structures that Clinton envisioned are not working as they have been pitched. At the same time, the cavalcade of revelations about the depths that technology companies plumb in order to extract value from a polluted public has led many to shed delusions about the ethical or progressive bona fides of an industry built on a collective devotion to Ayn Rand. Silicon Valley is happy to facilitate authoritarianism and Nazism, to drive unprecedented crises of homelessness, to systematically undermine any glimmer of dignity in human labor, to thoroughly toxify public discourse, to entrench and expand carceral capitalism so long as doing so expands the platform, attracts advertising and venture capital, and increases market valuation. As Bratton points out, that’s not a particularly Californian Ideology. It’s The Stack, both Gehry and Agora.

    _____

    R. Joshua Scannell holds a PhD in Sociology from the CUNY Graduate Center. He teaches sociology and women’s, gender, and sexuality studies at Hunter College, and is currently researching the political economic relations between predictive policing programs and urban informatics systems. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).


    _____

    Works Cited

    • Ali, Tariq. 2015. The Extreme Center: A Warning. London: Verso.
    • Crist, Eileen. 2016. “On the Poverty of Our Nomenclature.” In Anthropocene or Capitalocene: Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 14-33. Oakland: PM Press.
    • Harney, Stefano, and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Moore, Jason W. 2016. “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism.” In Anthropocene or Capitalocene: Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 1-13. Oakland: PM Press.
    • Negarestani, Reza. 2008. Cyclonopedia: Complicity with Anonymous Materials. Melbourne: re.press.
    • Paglen, Trevor. 2009. Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World. Boston: Dutton Adult.
    • Paglen, Trevor. 2010. Invisible: Covert Operations and Classified Landscapes. Reading: Aperture Press.
    • Puar, Jasbir. 2017. The Right to Maim: Debility, Capacity, Disability. Durham: Duke University Press.
    • Srnicek, Nick. 2016. Platform Capitalism. Boston: Polity Press.
    • Srnicek, Nick, and Alex Williams. 2016. Inventing the Future: Postcapitalism and a World Without Work. London: Verso.
  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a means of managing the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of postdigital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this, it is useful to examine the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human-computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, both in terms of screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role”; that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7 and the since-renamed Metro interface – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought: a primitivist predisposition which draws from both a neo-Euclidean model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly as structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.


    Apple uses a series of concepts to link its notion of flat design, including aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.

    Google, similarly, has reorganized its computational imaginary around a flattened layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic with constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality in as much as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple are crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material; that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might then think of Apple as painting on layers of glass, and of Google as placing thin paper objects (material) upon background paper. However, a key difference lies in Google’s use of light and shadow, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is that the materiality and specificity of the device should be responsive to the interface and context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology”, which he argued “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching in that much of the world’s initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a situation of a power relation and through this an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an order of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design, but more broadly for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think of these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique, flat design communicates the vertical structure of the computational stack on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways and creates new use-cases for wearables and nearables.

    These new interface paradigms, introduced by the flat turn, open very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further, I think that the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (e.g. think dashboards, personal information systems, the quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns, as well as a constituting grammar for building these systems in the first place. There is an element in which the notion of the postdigital may also be a useful way into thinking about the question of the link between art, computation and design given here (see Berry and Dieter, forthcoming), but also about the importance of notions of materiality for the conceptualization deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raise interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication” and the material carrier for the notion of publication is informed by the materiality, even if only a notional affordance given by this conceptualization. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There are also some interesting links to be explored between the Superflat style and postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic but also in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs, such as the RESTful api, but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and the Virtual Reality and Augmented Reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography
  • What Drives Automation?

    What Drives Automation?

    a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although often invoked as a prelude to transcending the division and offering a balanced assessment, in describing the dispute in these pro or con terms one has already betrayed one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

    The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers working with airline autopilot systems have discovered, problems that may be forewarnings of a future of the unchecked automating of transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

    The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat, featureless land as he located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

    These words segue into the following chapter, where Carr identifies the dominant design philosophy that leads automation technologies to inadvertently produce the problems he identified earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, only seeking human input in anomalous situations. This philosophy is behind modern autopilot systems that hand off control to human pilots for only a few minutes in a flight.

    The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. It could provide tactile feedback of its operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take a secondary role of reviewing and critiquing a decision made by a doctor, rather than the other way around.

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

    But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against that of the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that this reflects a misanthropic technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

    In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. From the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

    Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

    But the insight of psychoanalysis is that lack is ontological and irreducible: it cannot be filled in any permanent way, because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out the object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies, and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

    This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and what won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysis holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

    Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting, a rediscovery of the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, as Carr says in the final chapter of the farmer in Robert Frost’s poem “Mowing,” who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

    Nicholas Carr has a track record of provoking designers to rethink their assumptions. With The Shallows, and together with other authors making related arguments, he influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full-screen mode that hides distracting interface elements and the background windows of inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

    Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game out of letting the user take a shot at correcting mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, one that engaged our attention on the road rather than letting us tune out? Could we build an app that helps drivers maintain their skills by challenging them to adopt safer and more fuel-efficient driving techniques?
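    The spell-checker idea could be sketched in a few lines. The following is a minimal, hypothetical illustration (the function names, toy word list and correction table are all inventions for this example, not any real product): instead of silently fixing a detected mistake, the tool withholds the correction and grades the user’s own attempt, preserving the learning opportunity Carr values.

```python
# A toy dictionary and correction table; a real checker would use a
# full lexicon and an edit-distance model. Both are hypothetical here.
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}
CORRECTIONS = {"teh": "the", "jmups": "jumps"}

def find_mistakes(text):
    """Return the words not found in the dictionary, in order of appearance."""
    return [w for w in text.lower().split() if w not in KNOWN_WORDS]

def check_attempt(mistake, attempt):
    """Grade the user's attempted fix rather than applying one automatically."""
    expected = CORRECTIONS.get(mistake)
    if expected is None:
        return "no suggestion available"
    # Only a hint is revealed on failure, so the user keeps practicing.
    return "correct" if attempt == expected else f"try again (hint: {expected[0]}...)"

print(find_mistakes("teh quick brown fox jmups over the lazy dog"))
print(check_attempt("teh", "the"))
print(check_attempt("jmups", "jump"))
```

    The design choice is the inversion of the usual flow: the machine detects, but the human corrects. The same pattern could underlie the navigation example, with the device posing route questions rather than issuing turn-by-turn commands.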

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost reduction policies that profit from the deskilling and reduction of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books aimed at the general interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.

    Back to the essay

    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

    [2] There are a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014) whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

    [3] David D. Woods and David Tinapple, “W3: Watching Human Factors Watch People at Work,” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).