boundary 2


  • R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)


    A review of Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016)

    by R. Joshua Scannell


    Benjamin Bratton’s The Stack: On Software and Sovereignty is an often brilliant and regularly exasperating book. It is a diagnosis of the epochal changes in the relations between software, sovereignty, climate, and capital that underwrite the contemporary condition of digital capitalism and geopolitics. Anybody who is interested in thinking through the imbrication of digital technology with governance ought to read The Stack. There are many arguments that are useful or interesting. But reading it is an endeavor. Sprawling out across 502 densely packed pages, The Stack is nominally a “design brief” for the future. I don’t know that I understand that characterization, no matter how many times I read this tome.

    The Stack is chockablock with schematic abstractions. They make sense intuitively or cumulatively without ever clearly coming into focus. This seems to be a deliberate strategy. Early in the book, Bratton describes The Stack–the titular “accidental megastructure” of “planetary computation” that has effectively broken and redesigned, well, everything–as “a blur.” He claims that

    Only a blur provides an accurate picture of what is going on now and to come… Our description of a system in advance of its appearance maps what we can see but cannot articulate, on the one hand, versus what we know to articulate but cannot yet see, on the other. (14)

    This is also an accurate description of the prevailing sensation one feels working through the text. As Ian Bogost wrote in his review of The Stack for Critical Inquiry, reading the book feels “intense—meandering and severe but also stimulating and surprising. After a while, it was also a bit overwhelming. I’ll take the blame for that—I am not necessarily built for Bratton’s level and volume of scholarly intensity.” I agree on all fronts.

    Bratton’s inarguable premise is that the various computational technologies that collectively define the early decades of the 21st century—smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation—are not analytically separable. They are often literally interconnected but, more to the point, they combine to produce a governing architecture that has subsumed older calculative technologies like the nation state, the liberal subject, the human, and the natural. Bratton calls this “accidental megastructure” The Stack.

    Bratton argues that The Stack is composed of six “layers”: the earth, the cloud, the city, the address, the interface, and the user. They all indicate more or less what one might expect, but with a counterintuitive (and often Speculative Realist) twist. The earth is the earth but is also a calculation machine. The cloud is “the cloud,” but as a chthonic structure of distributed networks and nodal points that reorganize sovereign power and body forth quasi-feudal corporate sovereignties. The City is, well, cities, but not necessarily territorially bounded, formally recognized, or composed of human users. Users are also usually not human. They’re just as often robots or AI scripts. Really they can be anything that works up and down the layers, interacting with platforms (which can be governments) and routed through addresses (which are “every ‘thing’ that can be computed,” including “individual units of life, loaded shipping containers, mobile devices, locations of datum in databases, input and output events and enveloped entities of all size and character” [192], etc.).

    Each layer is richly thought through and described, though it’s often unclear whether the “layer” in question is “real” or a useful conceptual envelope or both or neither. That distinction is generally untenable, and Bratton would almost certainly reject the dichotomy between the “real” and the “metaphorical.” But it isn’t irrelevant for this project. He argues early on that, contra Marxist thought that understands the state metaphorically as a machine, The Stack is a “machine-as-the-state.” That’s both metaphorical and not. There really are machines that exert sovereign power, and there are plenty of humans in state apparatuses that work for machines. But there aren’t, really, machines that are states. Right?

    Moments like these, when The Stack’s concepts productively destabilize given categories (like the state) that have never been coherent enough to justify their power, are when the book is at its most compelling. And many of the counterintuitive moves that Bratton makes start and end with real, important insights. For instance, the insistence on the absolute materiality and absolute earthiness of The Stack and all of its operations leads Bratton to a thoroughgoing and categorical rejection of the prevailing “idiot language” that frames digital technology as though it exists in a literal “cloud,” or some sort of ethereal “virtual” that is not coincident with the “real” world. Instead, in The Stack, every point of contact between every layer is a material event that transduces and transforms everything else. To this end, he inverts Latour’s famous dictum that there is no global, only local. Instead, The Stack as planetary megastructure means that there is only global. The local is a dead letter. This is an anthropocene geography in which an electron, somewhere, is always firing because a fossil fuel is burning somewhere else. But it is also a post-anthropocene geography because humans are not The Stack’s primary users. The planet itself is a calculation machine, and it is agnostic about human life. So, there is a hybrid sovereignty: The Stack is a “nomos of the earth” in which humans are an afterthought.

    A Design for What?

    Bratton is at his conceptual best when he is at his weirdest. Cyclonopedic (Negarestani 2008) passages in which the planet slowly morphs into something like H.P. Lovecraft’s and H.R. Giger’s imaginations fucking in a Peter Thiel fever dream are much more interesting (read: horrifying) than the often perfunctory “real life” examples from “real world” geopolitical trauma, like “The First Sino-Google War of 2009.” But this leads to one of the most obvious shortcomings of the text. It is supposedly a “design brief,” but it’s not clear what or who it is a design brief for.

    For Bratton, design

    means the structuring of the world in reaction to an accelerated decay and in projective anticipation of a condition that is now only the ghostliest of a virtual present tense. This is a design for accommodating (or refusing to accommodate) the post-whatever-is-melting-into-air and prototyping for pre-what-comes-next: a strategic, groping navigation (however helpless) of the punctuations that bridge between these two. (354)

    Design, then, and not theory, because Bratton’s Stack is a speculative document. Given the bewildering and potentially apocalyptic conditions of the present, he wants to extrapolate outwards. What are the heterotopias-to-come? What are the constraints? What are the possibilities? Sounding a familiar frustration with the strictures of academic labor, he argues that this moment requires something more than diagnosis and critique. Rather,

    the process by which sovereignty is made more plural becomes a matter of producing more than discoursing: more about pushing, pulling, clicking, eating, modeling, stacking, prototyping, subtracting, regulating, restoring, optimizing, leaving alone, splicing, gardening and evacuating than about reading, examining, insisting, rethinking, reminding, knowing full-well, enacting, finding problematic, and urging. (303)

    No doubt. And, not that I don’t share the frustration, but I wonder what a highly technical, 500-page diagnosis of the contemporary state of software and sovereignty published and distributed by an academic press and written for an academic audience is if not discoursing? It seems unlikely that it can serve as a blueprint for any actually-existing power brokers, even though its insights are tremendous. At the risk of sounding cynical, calling The Stack a “design brief” seems like a preemptive move to liberate Bratton from having to seriously engage with the different critical traditions that work to make sense of the world as it is in order to demand something better. This allows for a certain amount of intellectual play that can sometimes feel exhilarating but can just as often read as a dodge—as a way of escaping the ethical and political stakes that inhere in critique.

    That is an important elision for a text that is explicitly trying to imagine the geopolitics of the future. Bratton seems to pose The Stack from a nebulous “Left” position that is equally disdainful of the sort of “Folk Politics” that Srnicek and Williams (2015) so loathe and of the accelerationist tinge of the Speculative Realists with whom he seems spiritually aligned. This sense of rootlessness sometimes works in Bratton’s favor. There are long stretches in which his cherry-picking and remixing of ideas from across a bewildering array of schools of thought yield real insights. But just as often, the “design brief” characterization seems to be a way out of thinking the implications of the conjuncture through to their conclusion. There is a breeziness about how Bratton poses futures-as-thought-experiments that is troubling.

    For instance, in thinking through the potential impacts of the capacity to measure planetary processes in real time, Bratton suggests that producing a sensible world is not only a process of generalizing measurement and representation. He argues that

    the sensibility of the world might be distributed or organized, made infrastructural, and activated to become part of how the landscape understands itself and narrates itself. It is not only a diagnostic image then; it is a tool for geo-politics in formation, emerging from the parametric multiplication and algorithmic conjugation of our surplus projections of worlds to come, perhaps in mimetic accordance with one explicit utopian conception or another, and perhaps not. Nevertheless, the decision between what is and is not governable may arise as much from what the model computational image cannot do as much as what it can. (301, emphasis added)

    Reading this, I wanted to know: What explicit utopian project is he thinking about? What are the implications of it going one way and not another? Why mimetic? What does the last bit about what is and is not governable mean? Or, more to the point: who and what is going to get killed if it goes one way and not another? There are a great many instances like this over the course of the book. At the precise moment where analysis might inform an understanding of where The Stack is taking us, Bratton bows out. He’s set down the stakes, and given a couple of ideas about what might happen. I guess that’s what a design brief is meant to do.

    Another example, this time concerning the necessity of geoengineering for solving what appears to be an ever-more-imminent climatic auto-apocalypse:

    The good news is that we know for certain that short-term “geoengineering” is not only possible but in a way inevitable, but how so? How and by whom does it go, and unfortunately for us the answer (perhaps) must arrive before we can properly articulate the question. For the darker scenarios, macroeconomics completes its metamorphosis into ecophagy, as the discovery of market failures becomes simultaneously the discovery of limits of planetary sinks (e.g., carbon, heat, waste, entropy, populist politics) and vice versa; The Stack becomes our dakhma. The shared condition, if there is one, is the mutual unspeakability and unrecognizability that occupies the seat once reserved for Kantian cosmopolitanism, now just a pre-event reception for a collective death that we will actually be able to witness and experience. (354, emphasis added)

    Setting aside the point that it is not at all clear to me that geoengineering is an inevitable or even appropriate (Crist 2016) way out of the anthropocene (or capitalocene? [Moore 2016]) crisis, if the answer for “how and by whom does it go” is to arrive before the question can be properly articulated, then the stack-to-come starts looking a lot like a sort of planetary dictatorship of, well, of who? Google? Mark Zuckerberg? In-Q-Tel? Y Combinator? And what exactly is the “populist politics” that sits in the Latourian litany alongside carbon, heat, waste, and entropy as a full “planetary sink”? Does that mean Trump, and all the other globally ascendant right-wing “populists”? Or does it mean “populist politics” in the Jonathan Chait sense that can’t differentiate between left and right and therefore sees both political projects as equally dismissible? Does populism include any politics that centers the needs and demands of the public? What are the commitments in this dichotomy? I suppose The Stack wouldn’t particularly care about these sorts of questions. But a human writing a 500-page playbook so that other humans might better understand the world-to-come might be expected to. After all, a choice between geoengineering and collective death might be what the human population of the planet is facing (for most of the planet’s species, and for the great many human societies already eliminated or dragged down that road during the current mass extinction, there is no choice), but such a binary doesn’t make for much of a design spec.

    One final example, this time on what the political subject of the stack-to-come ought to look like:

    We…require, as I have laid out, a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, parliamentary liberalism, poststructuralist linguistic reduction, or the will to secede into the moral safety of individual privacy and withdrawn from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.

    If “poststructuralist linguistic reduction” is on the same plane as “parliamentary liberalism” or “homo economicus” as one among several prevailing ideas of the contemporary “political subject,” then I am fairly certain that we are in the realm of academic “theory” rather than geopolitical “design.” The more immediate point is that I do understand what the terms that we ought to abandon mean, and agree that they need to go. But I don’t understand what the redefined political subject looks like. Again, if this is “theory,” then that sort of hand waving is unfortunately often to be expected. But if it’s a design brief—even a speculative one—for the transforming nature of sovereignty and governance, then I would hope for some more clarity on what political subjectivity looks like in The Stack-To-Come.

    Or, and this is really the point, I want The Stack to tell me something more about how The Stack participates in the production and extractable circulation of populations marked for death and debility (Puar 2017). And I want to know what, exactly, is so conceptually radical about pointing out that human beings are not at the center of the planetary systems that are driving transformations in geopolitics and sovereignty. After all, hasn’t that been exactly the precondition for the emergence of The Stack? This accidental megastructure born out of the ruthless expansions of digitally driven capitalism is not just working to transform the relationship between “human” and sovereignty. The condition of its emergence is precisely that most planetary homo sapiens are not human, and are therefore disposable and disposited towards premature death. The Stack might be “our” dakhma, if we’re speaking generically as a sort of planetary humanism that cannot but be read as white—or, more accurately, “capacitated.” But the systematic construction of human stratification along lines of race, gender, sex, and ability as precondition for capitalist emergence freights the stack with a more ancient, and ignored, calculus: that of the logistical work that shuttles humans between bodies, cargo, and capital. It is, in other words, the product of an older planetary death machine: what Fred Moten and Stefano Harney (2013) call the “logistics in the hold” that makes The Stack hum along.

    The tenor of much of The Stack is redolent of managerial triumphalism. The possibility of apocalypse is always minimized. Bratton offers, a number of times, that he’s optimistic about the future. He is disdainful of the most stringent left critics of Silicon Valley, and he thinks that we’ll probably be able to trust our engineers and institutions to work out The Stack’s world-destroying kinks. He sounds invested, in other words, in a rhetorical-political mode of thought that, for now, seems to have died on November 9, 2016. So it is not surprising that Bratton opens the book with an anecdote about Hillary Clinton’s vision of the future of world governance.

    The Stack begins with a reference to then-Secretary of State Clinton’s 2013 farewell address to the Council on Foreign Relations. In that speech, Clinton argued that the future of international governance requires a “new architecture for this new world, more Frank Gehry than formal Greek.” Unlike the Athenian Agora, which could be held up by “a few strong columns,” contemporary transnational politics is too complicated to rely on stolid architecture, and instead must make use of the kind of modular assemblage that made Gehry famous, one that “at first might appear haphazard, but in fact, [is] highly intentional and sophisticated.” Bratton interprets her argument as a “half-formed question, what is the architecture of the emergent geopolitics of this software society? What alignments, components, foundations, and apertures?” (Bratton 2016, 13).

    For Clinton, future governance must make a choice between Gehry and Agora. The Gehry future is that of the seemingly “haphazard” but “highly intentional and sophisticated” interlocking treaties, non-governmental organizations, and super- and supra-state technocratic actors working together to coordinate the disparate interests of states and corporations in the service of the smooth circulation of capital across a planetary logistics network. On the other side is a world order held up by “a few strong columns”—by implication the status quo after the collapse of the Soviet Union, a transnational sovereign apparatus anchored by the United States. The glaring absence in this dichotomy is democracy—or rather its assumed subsumption into American nationalism. Clinton’s Gehry future is a system of government whose machinations are by design opaque to those that would be governed, but whose beneficence is guaranteed by the good will of the powerful. The Agora—the fountainhead of slaveholder democracy—is metaphorically reduced to its pillars, particularly the United States and NATO. Not unlike ancient Athens, it’s democracy as empire.

    There is something darkly prophetic of the collapse of the Clintonian world vision, and perversely apposite, in Clinton’s rhetorical move to substitute Gehry for the Agora as the proper metaphor for future government. It is unclear why a megalomaniacal corporate starchitecture firm that robs public treasuries blind and facilitates tremendous labor exploitation ought to be the future for which the planet strives.

    For better or for worse, The Stack is a book about Clinton. As a “design brief,” it works from a set of ideas about how to understand and govern the relationship between software and sovereignty that were strongly intertwined with the Clinton-Obama political project. That means, abysmally, that it is now also about Trump. And Trump hangs synecdochically over theoretical provocations for what is to be done now that tech has killed the nation-state’s “Westphalian Loop.” This was a knotty question when the book went to press in February 2016 and Gehry seemed ascendant. Now that the Extreme Center’s (Ali 2015) project of tying neoliberal capitalism to non-democratic structures of technocratic governance appears to be collapsing across the planet, Clinton’s “half-formed question” is even knottier. If we’re living through the demise of the Westphalian nation state, then it’s sounding one hell of a murderous death rattle.

    Gehry or Agora?

    In the brief period between July 21 and November 8, 2016, when the United States’ cognoscenti convinced itself that another Clinton regime was inevitable, there was a neatly ordered expectation of how “pragmatic” future governance under a prolonged Democratic regime would work. In the main, the public could look forward to another eight years sunk in a “Gehry-like” neoliberal surround subtended by the technocratic managerialism of the Democratic Party’s right edge. And, while for most of the country and planet, that arrangement didn’t portend much to look forward to, it was at least not explicitly nihilistic in its outlook. The focus on management, and on the deliberate dismantling of the nation state as the primary site of governance in favor of the mesh of transnational agencies and organizations that composed 21st century neoliberalism’s star actants, meant that a number of questions about how the world would be arranged were left unsettled.

    By the end of election week, that future had fractured. The unprecedented amateurishness, decrypted racism, and incomparable misogyny of the Trump campaign portended an administration that most thought couldn’t, or at the very least shouldn’t, be trusted with the enormous power of the American executive. This stood in contrast to Obama, and (perhaps to a lesser extent) to Clinton, who were assumed to be reasonable stewards. This paradoxically helps demonstrate just how much the “rule of law” and governance by administrative norms that theoretically underlie the liberal national state had already deteriorated under Obama and his immediate predecessors—a deterioration that was in many ways made feasible by the innovations of the digital technology sector. As many have pointed out, the command-and-control prerogatives that Obama claimed for the expansion of executive power depended essentially on the public perception of his personal character.

    The American people, for instance, could trust planetary drone warfare because Obama claimed to personally vet our secret kill list, and promised to be deliberate and reasonable about its targets. Of course, Obama is merely the most publicly visible part of a kill-chain that puts this discretionary power over life and death in the hands of the executive. The kill-chain is dependent on the power of, and sovereign faith in, digital surveillance and analytics technologies. Obama’s kill-chain, in short, runs on the capacities of an American warfare state—distributed at nodal points across the crust of the earth, and up its Van Allen belts—to read planetary chemical, territorial, and biopolitical fluxes and fluctuations as translatable data that can be packet switched into a binary apparatus of life and death. This is the calculus that Obama conjures when he defines those mobile data points that concatenate into human beings as “baseball cards” that constitute a “continuing, imminent threat to the American people.” It is the work of planetary sovereignty that rationalizes and capacitates the murderous “fix” and “finish” of the drone program.

    In other words, Obama’s personal aura and eminent reasonableness legitimated an essentially unaccountable and non-localizable network of black sites and black ops (Paglen 2009, 2010) that loops backwards and forwards across the drone program’s horizontal regimes of national sovereignty and vertical regimes of cosmic sovereignty. It is, to use Clinton’s framework, a very Frank Gehry power structure. Donald Trump’s election didn’t transform these power dynamics. Instead, his personal qualities made the work of planetary computation in the service of sovereign power to kill suddenly seem dangerous or, perhaps better: unreasonable. Whether President Donald Trump would be as scrupulous as his predecessor in determining the list of humans fit for eradication was (formally speaking) a mystery, but practically a foregone conclusion. But in both presidents’ cases, the dichotomies between global and local, subject and sovereign, human and non-human that are meant to underwrite the nation state’s rights and responsibilities to act are fundamentally blurred.

    Likewise, Obama’s federal imprimatur recast the transparently disturbing decision to pursue mass distribution of privately manufactured surveillance technology – Taser’s police-worn body cameras, for instance – as a reasonable policy response to America’s dependence on heavily armed paramilitary forces to maintain white supremacy and crush the poor. Under Obama and Eric Holder, American liberals broadly trusted that digital criminal justice technologies were crucial for building a better, more responsive, and more responsible justice system. With Jeff Sessions in charge of the Department of Justice, the idea that the technologies that Obama’s Presidential Task Force on 21st Century Policing lauded as crucial for achieving the “transparency” needed to “build community trust” between historically oppressed groups and the police remained plausible instruments of progressive reform suddenly seemed absurd. Predictive policing, ubiquitous smart camera surveillance, and quantitative risk assessments sounded less like a guarantee of civil rights and more like a guarantee of civil rights violations under a president who lauds extrajudicial police power. Trump went out of his way to confirm these civil libertarian fears, as when he told Long Island law enforcement that “laws are stacked against you. We’re changing those laws. In the meantime, we need judges for the simplest thing — things that you should be able to do without a judge.”

    But, perhaps more to the point, the rollout of these technologies, like the rollout of the drone program, formalized a transformation in the mechanics of sovereign power that had long been underway. Stripped of the sales pitch and abstracted from the constitutional formalism that ordinarily sets the parameters for discussions of “public safety” technologies, what digital policing technologies do is flatten out the lived and living environment into a computational field. Police-worn body cameras quickly traverse the institutional terrain from a tool meant to secure civil rights against abusive officers to an artificially intelligent weapon that flags facial structures that match with outstanding warrants, that calculates changes in enframed bodily comportment to determine imminent threat to the officer-user, and that captures the observed social field as data privately owned by the public safety industry’s weapons manufacturers. Sovereignty, in this case, travels up and down a Stack of interoperative calculative procedures, with state sanction and human action just another data point in the proper administration of quasi-state violence. After all, it is Axon (formerly Taser), and not a government, that controls the servers that their body cams draw on to make real-time assessments of human danger. The state sanctions a human officer’s violence, but the decision-making apparatus that situates the violence is private, and inhuman. Inevitably, the drone war and carceral capitalism collapse into one another, as drones are outfitted with AI designed to identify crowd “violence” from the sky, a vertical parallax to pair with the officer-user’s body worn camera.

    Trump’s election seemed to show, with a clarity that had hitherto been unavailable to many, that wedding the American security apparatus’ planetary sovereignty to twenty years of unchecked libertarian technological triumphalism (even, or especially, if in the service of liberal principles like disruption, innovation, efficiency, transparency, convenience, and generally “making the world a better place”) might, in fact, be dangerous. When the Clinton-Obama project collapsed, its assumption that the intertwining of private and state sector digital technologies inherently improves American democracy and economy, and increases individual safety and security, looked absurd. The shock of Trump’s election, quickly and self-servingly blamed on Russian agents and Facebook, transformed Silicon Valley’s broadly shared Prometheanism into interrogations of the industry’s corrosive infrastructural toxicity and its deleterious effect on the liberal national state. If tech would ever come to Jesus, the end of 2016 would have had to be the moment. It did not.

    A few days after Trump won the election, I found myself a fly on the wall in a meeting with mid-level executives for one of the world’s largest technology companies (“The Company”). We were ostensibly brainstorming how to make The Cloud a force for “global good,” but Trump’s ascendancy and all its authoritarian implications made the supposed benefits of cloud computing—efficiency, accessibility, brain-shattering storage capacity—suddenly terrifying. Instead of setting about the dubious task of imagining how a transnational corporation’s efforts to leverage the gatekeeping power over access to the data of millions, and the private control over real-time identification technology (among other things), into heavily monetized semi-feudal quasi-sovereign power could be Globally Good, we talked about Trump.

    The Company’s reps worried that, Peter Thiel excepted, tech didn’t have anybody near enough to Trump’s miasmatic fog to sniff out the administration’s intentions. It was Clinton, after all, who saw the future in global information systems. Trump, as we were all so fond of pointing out, didn’t even use a computer. Unlike Clinton, the extent of Trump’s mania for surveillance and despotism was mysterious, if predictable. Nobody knew just how many people of color the administration had in its crosshairs, and The Company reps suggested that the tech world wasn’t sure how complicit it wanted to be in Trump’s explicitly totalitarian project. The execs extemporized on how fundamental the principles of democratic and republican government were to The Company, how committed they were to privacy, and how dangerous the present conjuncture was. As the meeting ground on, reason slowly asphyxiated on a self-evidently implausible bait hook: that it was now both the responsibility and appointed role of American capital, and particularly of the robber barons of Platform Capitalism (Srnicek 2016), to protect Americans from the fascistic grappling of American government. Silicon Valley was going to lead the #resistance against the very state surveillance and overreach that it capacitated, and The Company would lead Silicon Valley. That was the note on which the meeting adjourned.

    That’s not how things have played out. A month after that meeting, on December 14, 2016, almost all of Silicon Valley’s largest players sat down at Trump’s technology roundtable. Explaining themselves to an aghast (if credulous) public, tech’s titans argued that it was their goal to steer the new chief executive of American empire towards a maximally tractable gallimaufry of power. This argument, plus over one hundred companies’ decision to sign an amici curiae brief opposing Trump’s first attempt at a travel ban aimed at Muslims, seemed to publicly signal that Silicon Valley was prepared to #resist the most high-profile degradations of contemporary Republican government. But, in April 2017, Gizmodo inevitably reported that those same companies that appointed themselves the front line of defense against depraved executive overreach in fact quietly supported the new Republican president before he took office. The blog found that almost every major concern in the Valley donated tremendously to the Trump administration’s Presidential Inaugural Committee, which was impaneled to plan his sparsely attended inaugural parties. The Company alone donated half a million dollars. Only two tech firms donated more. It seemed an odd way to #resist.

    What struck me during the meeting was how weird it was that executives honestly believed a major transnational corporation would lead the political resistance against a president committed to the unfettered ability of American capital to do whatever it wants. What struck me afterward was how easily the boundaries between software and sovereignty blurred. The Company’s executives assumed, ad hoc, that their operation had the power to halt or severely hamper the illiberal policy priorities of government. By contrast, it’s hard to imagine mid-level General Motors executives believing that they have the capacity or responsibility to safeguard the rights and privileges of the republic. Except in an indirect way, selling cars doesn’t have much to do with the health of state and civil society. But state and civil society is precisely what Silicon Valley has privatized, monetized, and re-sold to the public. And even “state and civil society” is not quite enough. What Silicon Valley endeavors to produce is, pace Bratton, a planetary simulation as prime mover. The goal of digital technology conglomerates is not only to streamline the formal and administrative roles and responsibilities of the state, or to recreate the mythical meeting houses of the public sphere online. Platform capital has as its target the informational infrastructure that makes living on earth seem to make sense, to be sensible. And in that context, it’s commonsensical to imagine software as sovereignty.

    And this is the bind that will return us to The Stack. After one and a half relentless years of the Trump presidency, and a ceaseless torrent of public scandals concerning tech companies’ abuse of power, the technocratic managerial optimism that underwrote Clinton’s speech has come to a grinding halt. For the time being, at least, the “seemingly haphazard yet highly intentional and sophisticated” governance structures that Clinton envisioned are not working as they have been pitched. At the same time, the cavalcade of revelations about the depths that technology companies plumb in order to extract value from a polluted public has led many to shed delusions about the ethical or progressive bona fides of an industry built on a collective devotion to Ayn Rand. Silicon Valley is happy to facilitate authoritarianism and Nazism, to drive unprecedented crises of homelessness, to systematically undermine any glimmer of dignity in human labor, to thoroughly toxify public discourse, to entrench and expand carceral capitalism so long as doing so expands the platform, attracts advertising and venture capital, and increases market valuation. As Bratton points out, that’s not a particularly Californian Ideology. It’s The Stack, both Gehry and Agora.

    _____

    R. Joshua Scannell holds a PhD in Sociology from the CUNY Graduate Center. He teaches sociology and women’s, gender, and sexuality studies at Hunter College, and is currently researching the political economic relations between predictive policing programs and urban informatics systems. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).


    _____

    Works Cited

    • Ali, Tariq. 2015. The Extreme Center: A Warning. London: Verso.
    • Crist, Eileen. 2016. “On the Poverty of Our Nomenclature.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 14-33. Oakland: PM Press.
    • Harney, Stefano, and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Moore, Jason W. 2016. “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 1-13. Oakland: PM Press.
    • Negarestani, Reza. 2008. Cyclonopedia: Complicity with Anonymous Materials. Melbourne: re.press.
    • Paglen, Trevor. 2009. Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World. New York: Dutton.
    • Paglen, Trevor. 2010. Invisible: Covert Operations and Classified Landscapes. New York: Aperture.
    • Puar, Jasbir. 2017. The Right to Maim: Debility, Capacity, Disability. Durham: Duke University Press.
    • Srnicek, Nick. 2016. Platform Capitalism. Cambridge: Polity Press.
    • Srnicek, Nick, and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. London: Verso.
  • Artificial Intelligence as Alien Intelligence


    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit), whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI’s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.


    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children,” here and now, as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but that it represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but it has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being,” which he blames in his concluding paragraph for providing “theological and legislative comfort to chattel slavery,” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument, whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us; indeed, this premise, understood in terms of stewardship or commonwealth, would in my opinion go far toward correcting and preventing such careless destruction. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us, and keeps us lodged as delusive ignoramuses, in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem with saying so is not that doing so disrespects “technology” (as presumably, in his view, no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean), since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What our predicament demands in this moment is a clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems for the actual diversity of their stakeholders, and an equitable distribution of those costs, risks, and benefits. Quite a lot of unsustainable extractive and industrial production, as well as mass consumption and waste, would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need is to invest technology-in-general or environmental forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent, prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, and environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves. It is strangely consoling to pretend that our technologies, in some fancied monolithic construal, represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one, prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends: one that would invest computation with alien intelligence and agency while at once divesting the intelligent human actors (the coders, funders, and users of computation) of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution, in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall instead, by way of conclusion, is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis, Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal under the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation).

    What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley, from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse, and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.
