
  • Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)

    a review of Peter Frase, Four Futures: Life After Capitalism (Verso Jacobin Series, 2016)

    by Anthony Galluzzo

    ~

    Charlie Brooker’s acclaimed British techno-dystopian television series, Black Mirror, returned last year in a more American-friendly form. The third season, now streaming on Netflix, opened with “Nosedive,” a satirical depiction of a recognizable near future in which user-generated social media scores—on the model of Yelp reviews, Facebook likes, and Twitter retweets—determine life chances, including access to basic services, such as housing, credit, and jobs. The show follows striver Lacie Pound—played by Bryce Dallas Howard—who, in seeking to boost her solid 4.2 life score, ends up inadvertently wiping out all of her points, in the nosedive named by the episode’s title. Brooker offers his viewers a nightmare variation on a now familiar online reality, as Lacie rates every human interaction and is rated in turn, to disastrous result. And this nightmare is not so far from the case, as online reputational hierarchies increasingly determine access to precarious employment opportunities. We can see this process in today’s so-called sharing economy, in which user approval determines how many rides will go to the Uber driver, or whether the room you are renting on Airbnb, in order to pay your own exorbitant rent, gets rented.

    Brooker grappled with similar themes during the show’s first season; for example, “Fifteen Million Merits” shows us a future world of human beings forced to spend their time on exercise bikes, presumably in order to generate power plus the “merits” that function as currency, even as they are forced to watch non-stop television, advertisements included. It is television—specifically a talent show—that offers an apparent escape to the episode’s protagonists. Brooker revisits these concerns—which combine anxieties regarding new media and ecological collapse in the context of a viciously unequal society—in the final episode of the new season, entitled “Hated in the Nation,” which features robotic bees, built for pollination in a world after colony collapse, that are hacked and turned to murderous use. Here is an apt metaphor for the virtual swarming that characterizes so much online interaction.

    Black Mirror corresponds to what literary critic Tom Moylan calls a “critical dystopia.” [1] Rather than a simple exercise in pessimism or anti-utopianism, Moylan argues that critical dystopias, like their utopian counterparts, also offer emancipatory political possibilities in exposing the limits of our social and political status quo, such as the naïve techno-optimism that is certainly one object of Brooker’s satirical anatomies. Brooker in this way does what Jacobin magazine editor and social critic Peter Frase claims to do in his Four Futures: Life After Capitalism, a speculative exercise in “social science fiction” that uses utopian and dystopian science fiction as a means to explore what might come after global capitalism. Ironically, Frase includes both online reputational hierarchies and robotic bees in his two utopian scenarios: one of the more dramatic, if perhaps inadvertent, ways that Frase collapses dystopian into utopian futures.

    Frase echoes the opening lines of Marx and Engels’ Communist Manifesto as he describes the twin “specters of ecological catastrophe and automation” that haunt any possible post-capitalist future. While total automation threatens to make human workers obsolete, the global planetary crisis threatens life on earth as we have known it for the past 12,000 years or so. Frase contends that we are facing a “crisis of scarcity and a crisis of abundance at the same time,” making our moment one “full of promise and danger.” [2]

    The attentive reader can already see in this introductory framework the too-often unargued assumptions and easy dichotomies that characterize the book as a whole. For example, why is total automation plausible in the next 25 years, according to Frase, who largely supports this claim by drawing on the breathless pronouncements of a technophilic business press that has made similar promises for nearly a hundred years? And why does automation equal abundance—assuming the more egalitarian social order that Frase alternately calls “communism” or “socialism”—especially when we consider the ecological crisis Frase invokes as one of his two specters? This crisis is very much bound to an energy-intensive technosphere that is already pushing against several of the planetary boundaries that make for a habitable planet; total automation would expand this same technosphere by several orders of magnitude, requiring that much more energy, materials, and environmental sinks to absorb tomorrow’s life-sized iPhones or their corpses. Frase deliberately avoids these empirical questions—and the various debates among economists, environmental scientists, and computer programmers about the feasibility of AI, the extent to which automation is actually displacing workers, and the ecological limits to technological growth, at least as technology is currently constituted—by offering his work as the “social science fiction” mentioned above, perhaps in the vein of Black Mirror. He distinguishes this method from futurism or prediction, as he writes, “science fiction is to futurism as social theory is to conspiracy theory.” [3]

    In one of his few direct citations, Frase invokes Marxist literary critic Fredric Jameson, who argues that conspiracy theory and its fictions are ideologically distorted attempts to map an elusive and opaque global capitalism: “Conspiracy, one is tempted to say, is the poor person’s cognitive mapping in the postmodern age; it is the degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system, whose failure is marked by its slippage into sheer theme and content.” [4] For Jameson, a more comprehensive cognitive map of our planetary capitalist civilization necessitates new forms of representation to better capture and perhaps undo our seemingly eternal and immovable status quo. In the words of McKenzie Wark, Jameson proposes nothing less than a “theoretical-aesthetic practice of correlating the field of culture with the field of political economy.” [5] And it is possibly with this “theoretical-aesthetic practice” in mind that Frase turns to science fiction as his preferred tool of social analysis.

    The book accordingly proceeds by way of a grid organized around the coordinates “abundance/scarcity” and “egalitarianism/hierarchy”—in another echo of Jameson, namely his structuralist penchant for Greimas squares. Hence we get abundance with egalitarianism, or “communism,” followed by its dystopian counterpart, rentism, or hierarchical plenty, in the first two futures; similarly, the final futures move from an equitable scarcity, or “socialism,” to a hierarchical and apocalyptic “exterminism.” Each of these chapters begins with a science fiction, ranging from an ostensibly communist Star Trek to the exterminationist visions presented in Orson Scott Card’s Ender’s Game, upon which Frase builds his various future scenarios. These scenarios are more often than not commentaries on present-day phenomena, such as 3D printers or the sharing economy, or advocacy for various measures, like a Universal Basic Income, which Frase presents as the key to achieving his desired communist future.

    With each of his futures anchored in a literary (or cinematic, or televisual) science fiction narrative, Frase’s speculations rely on imaginative literature, even as he avoids any explicit engagement with literary criticism and theory, such as the aforementioned work of Jameson. Jameson famously argues (see Jameson 1982, and the more elaborated later versions in texts such as Jameson 2005) that the utopian text, beginning with Thomas More’s Utopia, simultaneously offers a mystified version of dominant social relations and an imaginative space for rehearsing radically different forms of sociality. But this dialectic of ideology and utopia is absent from Frase’s analysis, where his select space operas are all good or all bad: either The Jetsons or Elysium.

    And, in a marked contrast with Jameson’s symptomatic readings, some science fiction is for Frase more equal than others when it comes to radical sociological speculation, as evinced by his contrasting views of George Lucas’s Star Wars and Gene Roddenberry’s Star Trek. According to Frase, in “Star Wars, you don’t really care about the particularities of the galactic political economy,” while in Star Trek, “these details actually matter. Even though Star Trek and Star Wars might superficially look like similar tales of space travel and swashbuckling, they are fundamentally different types of fiction. The former exists only for its characters and its mythic narrative, while the latter wants to root its characters in a richly and logically structured social world.” [6]

    Frase here understates his investment in Star Trek, whose “structured social world” is later revealed as his ideal-type for a high-tech, fully automated luxury communism, while Star Wars is relegated to the role of the space fantasy foil. But surely the original Star Wars, intentionally inspired by the Vietnam War, is at least an anticolonial allegory in which a ragtag rebel alliance faces off against a technologically superior evil empire. Lucas turned to the space opera after he lost his bid to direct Apocalypse Now—which was originally based on Lucas’s own idea. According to one account of the franchise’s genesis, “the Vietnam War, which was an asymmetric conflict with a huge power unable to prevail against guerrilla fighters, instead became an influence on Star Wars. As Lucas later said, ‘A lot of my interest in Apocalypse Now carried over into Star Wars.’” [7]

    Texts—literary, cinematic, and otherwise—often combine progressive and reactionary, utopian and ideological elements. Yet it is precisely the mixed character of speculative narrative that Frase ignores throughout his analysis, reducing each of his literary examples to unequivocally good or bad, utopian or dystopian, blueprints for “life after capitalism.” Why anchor radical social analysis in various science fictions while refusing basic interpretive argument? As with so much else in Four Futures, Frase uses assumption—asserting that Star Trek has one specific political valence or that total automation guided by advanced AI is an inevitability within 25 years—in the service of his preferred policy outcomes (and the nightmare scenarios that function as the only alternatives to those outcomes), while avoiding engagement with debates related to technology, ecology, labor, and the utopian imagination.

    Frase in this way evacuates the politically progressive and critical utopian dimensions from George Lucas’s franchise, elevating the escapist and reactionary dimensions that represent the ideological, as opposed to the utopian, pole of this fantasy. Frase similarly ignores the ideological elements of Roddenberry’s Star Trek: “The communistic quality of the Star Trek universe is often obscured because the films and TV shows are centered on the military hierarchy of Starfleet, which explores the galaxy and comes into conflict with alien races. But even this seems largely a voluntarily chosen hierarchy.” [8]

    Frase’s focus, regarding Star Trek, is almost entirely on the replicators that can make something, anything, from nothing, so that Captain Picard, from the eighties-era series reboot, orders a “cup of Earl Grey, hot,” from one of these magical machines, and immediately receives Earl Grey, hot. Frase equates our present-day 3D printers with these same replicators over the course of all four of his futures, despite the fact that unlike replicators, 3D printers require inputs: they do not make matter, but shape it.
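    Since the contrast turns precisely on inputs, a bit of arithmetic makes the point concrete. What follows is a minimal, illustrative Python sketch (not from Frase or from this review; the density, infill, and waste figures are assumptions chosen for the example) estimating the plastic a desktop fused-filament printer consumes for a single part:

    ```python
    # Illustrative sketch: rough material budget for one 3D-printed part.
    # All figures below are assumptions for the example.

    PLA_DENSITY_G_PER_CM3 = 1.24   # typical published density of PLA filament
    WASTE_FACTOR = 1.15            # assume ~15% extra for supports and purging

    def filament_needed(part_volume_cm3, infill=0.2):
        """Grams of filament consumed for a part of the given solid volume.

        `infill` is the fraction of the interior actually filled with
        plastic; shells are ignored to keep the arithmetic minimal.
        """
        printed_volume = part_volume_cm3 * infill
        return printed_volume * PLA_DENSITY_G_PER_CM3 * WASTE_FACTOR

    # A 500 cm^3 object at 20% infill still consumes on the order of 140 g
    # of plastic: the printer shapes matter, it does not conjure it.
    print(f"{filament_needed(500.0):.0f} g of PLA")
    ```

    However the numbers are tuned, the input side of the ledger never reaches zero, which is the difference between a printer and a replicator.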

    3D printing encompasses a variety of processes in which would-be makers create an image with a computer and CAD (computer-aided design) software, which in turn provides a blueprint for the three-dimensional object to be “printed.” This requires either the addition of material—usually plastic—or the injection of that material into a mould. The most basic type of 3D printing involves heating “(plastic, glue-based) material that is then extruded through a nozzle. The nozzle is attached to an apparatus similar to a normal 2D ink-jet printer, just that it moves up and down, as well. The material is put on layer over layer. The technology is not substantially different from ink-jet printing, it only requires slightly more powerful computing electronics and a material with the right melting and extrusion qualities.” [9] This is still the most affordable and pervasive way to make objects with 3D printers—most often used to make small models and components. It is also the version of 3D printing that lends itself to celebratory narratives of post-industrial techno-artisanal home manufacture pushed by industry cheerleaders and enthusiasts alike. Yet the more elaborate versions of 3D printing—“printing” everything from complex machinery to food to human organs—rely on the more complex and expensive industrial versions of the technology that require lasers (e.g., stereolithography and selective laser sintering). Frase espouses a particular left techno-utopian line that sees the end of mass production in 3D printing—especially with the free circulation of the programs for various products outside of our intellectual property regime; this is how he distinguishes his communist utopia from the dystopian rentism that most resembles our current moment, with material abundance taken for granted. And it is this fantasy of material abundance and post-work/post-worker production that presumably appeals to Frase, who describes himself as an advocate of “enlightened Luddism.”

    This is an inadvertently ironic characterization, considering the extent to which these emancipatory claims conceal and distort the labor-discipline imperative that is central to the shape and development of this technology. As Johan Söderberg argues, “we need to put enthusiastic claims for 3D printers into perspective. One claim is that laid-off American workers can find a new source of income by selling printed goods over the Internet, which will be an improvement, as degraded factory jobs are replaced with more creative employment opportunities. But factory jobs were not always monotonous. They were deliberately made so, in no small part through the introduction of the same technology that is expected to restore craftsmanship. ‘Makers’ should be seen as the historical result of the negation of the workers’ movement.” [10]

    Söderberg draws on the work of David Noble, who outlines how the numerical control technology central to the growth of post-war factory automation was developed specifically to deskill and disempower workers during the Cold War period. Unlike Frase, both of these authors foreground those social relations, which include capital’s need to more thoroughly exploit and dominate labor, embedded in the architecture of complex megatechnical systems, from factory automation to 3D printers. In collapsing 3D printers into Star Trek-style replicators, Frase avoids these questions as well as the more immediately salient issue of resource constraints that should occupy any prognostication that takes the environmental crisis seriously.

    The replicator is the key to Frase’s dream of endless abundance on the model of post-war US-style consumer affluence and the end of all human labor. But, rather than a simple blueprint for utopia, Star Trek’s juxtaposition of techno-abundance with military hierarchy and a tacitly expansionist galactic empire—despite the show’s depiction of a Starfleet “prime directive” that forbids direct intervention into the affairs of the extraterrestrial civilizations encountered by the federation’s starships, the Enterprise’s crew, like its ostensibly benevolent US original, almost always intervenes—is significant. The original Star Trek is arguably a liberal iteration of Kennedy-era US exceptionalism, and reflects a moment in which relatively wide-spread first world abundance was underwritten by the deliberate underdevelopment, appropriation, and exploitation of various “alien races’” resources, land, and labor abroad. Abundance in fact comes from somewhere and someone.

    As historian H. Bruce Franklin argues, the original series reflects US Cold War liberalism, which combined Roddenberry’s progressive stances regarding racial inclusion within the parameters of the United States and its Starfleet doppelganger, with a tacitly anti-communist expansionist viewpoint, so that the show’s Klingon villains often serve as proxies for the Soviet menace. Franklin accordingly charts the show’s depictions of the Vietnam War over the course of several episodes, moving from a pro-war and pro-American stance to a mildly anti-war position in the wake of the Tet Offensive: “The first two of these episodes, ‘The City on the Edge of Forever’ and ‘A Private Little War,’ had suggested that the Vietnam War was merely an unpleasant necessity on the way to the future dramatized by Star Trek. But the last two, ‘The Omega Glory’ and ‘Let That Be Your Last Battlefield,’ broadcast in the period between March 1968 and January 1969, are so thoroughly infused with the desperation of the period that they openly call for a radical change of historic course, including an end to the Vietnam War and to the war at home.” [11]

    Perhaps Frase’s inattention to Jameson’s dialectic of ideology and utopia reflects a too-literal approach to these fantastical narratives, even as he proffers them as valid tools for radical political and social analysis. We could see in this inattention a bit too much of the fan-boy’s enthusiasm, which is also evinced by the rather narrow and backward-looking focus on post-war space operas to the exclusion of the self-consciously radical science fiction narratives of Ursula K. Le Guin, Samuel Delany, and Octavia Butler, among others. These writers use the tropes of speculative fiction to imagine profoundly different social relations that are the end-goal of all emancipatory movements. In place of emancipated social relations, Frase too often relies on technology, and his readings must in turn be read with these limitations in mind.

    Unlike the best speculative fiction, utopian or dystopian, Frase’s “social science fiction” too often avoids the question of social relations—including the social relations embedded in the complex megatechnical systems Frase takes for granted as neutral forces of production. He accordingly announces at the outset of his exercise: “I will make the strongest assumption possible: all need for human labor in the production process can be eliminated, and it is possible to live a life of pure leisure while machines do all the work.” [12] The science fiction trope effectively absolves Frase from engagement with the technological, ecological, or social feasibility of these predictions, even as he announces his ideological affinities with a certain version of post- and anti-work politics that breaks with orthodox Marxism and its socialist variants.

    Frase’s Jetsonian vision of the future resonates with various futurist currents that we can now see across the political spectrum, from the Silicon Valley Singularitarianism of Ray Kurzweil or Elon Musk, on the right, to various neo-Promethean currents on the left, including so-called “left accelerationism.” Frase defends his assumption as a desire “to avoid long-standing debates about post-capitalist organization of the production process.” While such a strict delimitation is permissible for speculative fiction—an imaginative exercise regarding what is logically possible, including time travel or immortality—Frase specifically offers science fiction as a mode of social analysis, which presumably entails grappling with rather than avoiding current debates on labor, automation, and the production process.

    Ruth Levitas, in her 2013 book Utopia as Method: The Imaginary Reconstitution of Society, offers a more rigorous definition of social science fiction via her eponymous “utopia as method.” This method combines sociological analysis and imaginative speculation, which Levitas defends as “holistic. Unlike political philosophy and political theory, which have been more open than sociology to normative approaches, this holism is expressed at the level of concrete social institutions and processes.” [13] But that attentiveness to concrete social institutions and practices, combined with counterfactual speculation regarding another kind of human social world, is exactly what is missing in Four Futures. Frase uses grand speculative assumptions—such as the inevitable rise of human-like AI or the complete disappearance of human labor, all within 25 years or so—in order to avoid significant debates that are ironically much more present in purely fictional works, such as the aforementioned Black Mirror or the novels of Kim Stanley Robinson, than in his own overtly non-fictional speculations. From the standpoint of radical literary criticism and radical social theory, Four Futures is wanting. It fails as analysis. And, if one primary purpose of utopian speculation, in its positive and negative forms, is to open an imaginative space in which wholly other forms of human social relations can be entertained, Frase’s speculative exercise also exhibits a revealing paucity of imagination.

    This is most evident in Frase’s most explicitly utopian future, which he calls “communism,” without any mention of class struggle, the collective ownership of the means of production, or any of the other elements we usually associate with “communism”; instead, 3D printers-cum-replicators will produce whatever you need whenever you need it at home, an individualizing techno-solution to the problem of labor, production, and its organization that resembles alchemy in its indifference to material reality and the scarce material inputs required by 3D printers. Frase proffers a magical vision of technology so as to avoid grappling with the question of social relations; even more than this, in the coda to this chapter, Frase reveals the extent to which current patterns of social organization and stratification remain under his “communism.” Frase begins this coda with a question: “in a communist society, what do we do all day?” To which he responds: “The kind of communism I’ve described is sometimes mistakenly construed, by both its critics and its adherents, as a society in which hierarchy and conflict are wholly absent. But rather than see the abolition of the capital-wage relation as a single-shot solution to all possible social problems, it is perhaps better to think of it in the terms used by political scientist Corey Robin, as a way to ‘convert hysterical misery into ordinary unhappiness.’” [14]

    Frase goes on to argue—rightly—that the abolition of class society or wage labor will not put an end to a variety of other oppressions, such as those based in gender and racial stratification; he in this way departs from the class reductionist tendencies sometimes on view in the magazine he edits. His invocation of Corey Robin is nonetheless odd considering the Promethean tenor of Frase’s preferred futures. Robin contends that while the end of exploitation, and capitalist social relations, would remove the major obstacle to human flourishing, human beings will remain finite and fragile creatures in a finite and fragile world. Robin in this way overlaps with Fredric Jameson’s remarkable essay on Soviet writer Andrei Platonov’s Chevengur, in which Jameson writes: “Utopia is merely the political and social solution of collective life: it does not do away with the tensions and contradictions inherent in both interpersonal relations and in bodily existence itself (among them, those of sexuality), but rather exacerbates those and allows them free rein, by removing the artificial miseries of money and self-preservation [since] it is not the function of Utopia to bring the dead back to life nor abolish death in the first place.” [15] Both Jameson and Robin recall Frankfurt School thinker Herbert Marcuse’s distinction between necessary and surplus repression: while the latter encompasses all of the unnecessary miseries attendant upon a class-stratified form of social organization that runs on exploitation, the former represents the necessary adjustments we make to socio-material reality and its limits.

    It is telling that while Star Trek-style replicators fall within the purview of the possible for Frase, hierarchy, like death, will always be with us, since he at least initially argues that status hierarchies will persist after the “organizing force of the capital relation has been removed” (59). Frase oscillates between describing these status hierarchies as an unavoidable, if unpleasant, necessity and a desirable counter to the uniformity of an egalitarian society. Frase illustrates this point in recalling Cory Doctorow’s Down and Out in the Magic Kingdom, a dystopian novel that depicts a world where all people’s needs are met at the same time that everyone competes for reputational “points”—called Whuffie—on the model of Facebook “likes” and Twitter retweets. Frase’s communism here resembles the world of Black Mirror described above. Frase nonetheless shifts from the rhetoric of necessity to qualified praise in an extended discussion of Dogecoin, an alternative currency used to tip or “transfer a small number of [Dogecoins] to another Internet user in appreciation of their witty and helpful contributions” (60). Yet Dogecoin, among all cryptocurrencies, is mostly a joke, and like many cryptocurrencies it is one whose “decentralized” nature scammers have used to their own advantage, most famously in 2015. In the words of one former enthusiast: “Unfortunately, the whole ordeal really deflated my enthusiasm for cryptocurrencies. I experimented, I got burned, and I’m moving on to less gimmicky enterprises.” [16]

    But how is this dystopian scenario either necessary or desirable?  Frase contends that “the communist society I’ve sketched here, though imperfect, is at least one in which conflict is no longer based in the opposition between wage workers and capitalists or on struggles…over scarce resources” (67). His account of how capitalism might be overthrown—through a guaranteed universal income—is insufficient, while resource scarcity and its relationship to techno-abundance remains unaddressed in a book that purports to take the environmental crisis seriously. What is of more immediate interest in the case of this coda to his most explicitly utopian future is Frase’s non-recognition of how internet status hierarchies and alternative currencies are modeled on and work in tandem with capitalist logics of entrepreneurial selfhood. We might consider Pierre Bourdieu’s theory of social and cultural capital in this regard, or how these digital platforms and their ever-shifting reputational hierarchies are the foundation of what Jodi Dean calls “communicative capitalism.” [17]

    Yet Frase concludes his chapter by telling his readers that it would be a “misnomer” to call his communist future an “egalitarian configuration.” Perhaps Frase offers his fully automated Facebook utopia as a counterpoint to the Cold War era critique of utopianism in general and communism in particular: that it leads to grey uniformity and universal mediocrity. This response—a variation on Frase’s earlier discussion of Star Trek’s “voluntary hierarchy”—accepts the premise of the Cold War anti-utopian criticisms, i.e., that the human differences that make life interesting, and generate new possibilities, require hierarchy of some kind. In other words, this exercise in utopian speculation cannot move outside the horizon of our own present-day ideological common sense.

    We can again see this tendency at the very start of the book. Is total automation an unambiguous utopia or a reflection of Frase’s own unexamined ideological proclivities, on view throughout the various futures, for high tech solutions to complex socio-ecological problems? For various flavors of deus ex machina—from 3D printers to replicators to robotic bees—in place of social actors changing the material realities that constrain them through collective action? Conversely, are the “crisis of scarcity” and the visions of ecological apocalypse Frase evokes intermittently throughout his book purely dystopian or ideological? Surely, since Thomas Malthus’s 1798 Essay on Population, apologists for various ruling orders have used the threat of scarcity and material limits to justify inequity, exploitation, and class division: poverty is “natural.” Yet, can’t we also discern in contemporary visions of apocalypse a radical desire to break with a stagnant capitalist status quo? And in the case of the environmental state of emergency, don’t we have a rallying point for constructing a very different eco-socialist order?

    Frase is a founding editor of Jacobin magazine and a long-time member of the Democratic Socialists of America. He nonetheless distinguishes himself from the reformist and electoral currents in those organizations, in addition to much of what passes for orthodox Marxism. Rather than full employment—for example—Frase calls for the abolition of work and the working class in a way that echoes more radical anti-work and post-workerist modes of communist theory. So, in a recent editorial published by Jacobin, entitled “What It Means to Be on the Left,” Frase differentiates himself from many of his DSA comrades in declaring that “The socialist project, for me, is about something more than just immediate demands for more jobs, or higher wages, or universal social programs, or shorter hours. It’s about those things. But it’s also about transcending, and abolishing, much of what we think defines our identities and our way of life.” Frase goes on to sketch an emphatically utopian communist horizon that includes the abolition of class, race, and gender as such. These are laudable positions, especially when we consider a new new left milieu, some of whose most visible representatives dismiss race and gender concerns as “identity politics,” while redefining radical class politics as a better deal for some amorphous US working class within an apparently perennial capitalist status quo.

    Frase’s utopianism in this way represents an important counterpoint within this emergent left. Yet his book-length speculative exercise—policy proposals cloaked as possible scenarios—reveals his own enduring investments in the simple “forces vs. relations of production” dichotomy that underwrote so much of twentieth-century state socialism, with its disastrous ecological record and human cost. And this simple faith in the emancipatory potential of capitalist technology—given the right political circumstances, and despite the complete absence of any account of what creating those circumstances might entail—frequently resembles a social democratic version of the Californian ideology, or the kind of Silicon Valley conventional wisdom pushed by Elon Musk: a more efficient, egalitarian, and techno-utopian version of US capitalism. Frase mines various left communist currents, from post-operaismo to communization, only to evacuate these currents of their radical charge in marrying them to technocratic and technophilic reformism; hence UBI plus “replicators” will spontaneously lead to full communism. Four Futures is in this way an important, because symptomatic, expression of what Jason Smith (2017) calls “social democratic accelerationism,” animated by a strange faith in magical machines in addition to a disturbing animus toward ecology, non-human life, and the natural world in general.

    _____

    Anthony Galluzzo earned his PhD in English Literature at UCLA. He specializes in radical transatlantic English-language literary cultures of the late eighteenth and nineteenth centuries. He has taught at the United States Military Academy at West Point, Colby College, and NYU.

    _____

    Notes

    [1] See Tom Moylan, Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia (Boulder: Westview Press, 2000).

    [2] Peter Frase, Four Futures: Life After Capitalism (London: Verso Books, 2016), 3.

    [3] Ibid., 27.

    [4] Fredric Jameson, “Cognitive Mapping,” in C. Nelson and L. Grossberg, eds., Marxism and the Interpretation of Culture (Urbana: University of Illinois Press, 1990), 6.

    [5] McKenzie Wark, “Cognitive Mapping,” Public Seminar (May 2015).

    [6] Frase, 24.

    [7] This space fantasy also exhibits the escapist, mythopoetic, and even reactionary elements Frase notes—for example, its hereditary caste of Jedi fighters and their ancient religion—as Benjamin Hufbauer observes: “in many ways, the political meanings in Star Wars were and are progressive, but in other ways the film can be described as middle-of-the-road, or even conservative.” Hufbauer, “The Politics Behind the Original Star Wars,” Los Angeles Review of Books (December 21, 2015).

    [8] Frase, 49.

    [9] Angry Workers World, “Soldering On: Report on Working in a 3D-Printer Manufacturing Plant in London,” libcom.org (March 24, 2017).

    [10] Johan Söderberg, “A Critique of 3D Printing as a Critical Technology,” P2P Foundation (March 16, 2013).

    [11] Franklin, “Star Trek in the Vietnam Era,” Science Fiction Studies, #62 = Volume 21, Part 1 (March 1994).

    [12] Frase, 6.

    [13] Ruth Levitas, Utopia As Method: The Imaginary Reconstitution of Society. (London: Palgrave Macmillan, 2013), xiv-xv.

    [14] Frase, 58.

    [15] Jameson, “Utopia, Modernism, and Death,” in Seeds of Time (New York: Columbia University Press, 1996), 110.

    [16] Kaleigh Rogers, “The Guy Who Ruined Dogecoin,” VICE Motherboard (March 6, 2015).

    [17] See Jodi Dean, Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics (Durham: Duke University Press, 2009).

    _____

    Works Cited

    • Frase, Peter. 2016. Four Futures: Life After Capitalism. New York: Verso.
    • Jameson, Fredric. 1982. “Progress versus Utopia; or, Can We Imagine the Future?” Science Fiction Studies 9:2 (July). 147-158.
    • Jameson, Fredric. 1996. “Utopia, Modernism, and Death,” in Seeds of Time. New York: Columbia University Press.
    • Jameson, Fredric. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London: Verso.
    • Levitas, Ruth. 2013. Utopia as Method: The Imaginary Reconstitution of Society. London: Palgrave Macmillan.
    • Moylan, Tom. 2000. Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia. Boulder: Westview Press.
    • Smith, Jason E. 2017. “Nowhere To Go: Automation Then And Now.” The Brooklyn Rail (March 1).

     

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or as thorough a job as me, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4,726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.
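    To make the mechanics of that whittling concrete, here is a minimal, illustrative Python sketch (an assumption, not the NMC’s actual procedure; the candidate names, panel size, and drop rule are all hypothetical) in which experts repeatedly rank the surviving candidates and the lowest scorers are dropped until six remain:

    ```python
    # Illustrative Delphi-style whittling, not the NMC's actual procedure.
    import random

    def delphi_whittle(candidates, n_experts=15, keep=6, seed=0):
        rng = random.Random(seed)
        survivors = list(candidates)
        while len(survivors) > keep:
            # Each expert submits a ranking; a candidate's score is the
            # sum of its positions across experts (lower is better).
            scores = {c: 0 for c in survivors}
            for _ in range(n_experts):
                # Random rankings stand in for real expert judgments here.
                ranking = rng.sample(survivors, len(survivors))
                for position, candidate in enumerate(ranking):
                    scores[candidate] += position
            # Drop the bottom quarter (at least one) each round.
            n_drop = max(1, len(survivors) // 4)
            survivors = sorted(survivors, key=scores.get)[:len(survivors) - n_drop]
        return survivors

    print(delphi_whittle([f"tech-{i}" for i in range(20)]))
    ```

    What the sketch necessarily leaves out is the controlled feedback between rounds – the circulated rationales that push a real Delphi panel toward consensus – and that feedback is precisely what substitutes expert agreement for theory or models.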

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining of education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future, make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).
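
    To make that 50% threshold concrete, here is a minimal sketch, in Python, of the comparison economic historians are running. The dates are illustrative placeholders rather than sourced figures – the only point is that the clock starts at commercialization, not invention:

        # A minimal sketch of the adoption-lag comparison described above.
        # All dates are illustrative placeholders, NOT sourced figures.
        ADOPTION = {
            # technology: (year commercialized, year reaching half of US households)
            "electricity": (1882, 1925),
            "telephone": (1878, 1946),
            "television": (1939, 1955),
            "personal computer": (1977, 2000),
        }

        def adoption_lag(commercialized: int, half_adopted: int) -> int:
            """Years from commercialization (not invention) to 50% adoption."""
            return half_adopted - commercialized

        for tech, (start, half) in sorted(ADOPTION.items(), key=lambda kv: adoption_lag(*kv[1])):
            print(f"{tech:>17}: {adoption_lag(start, half):3d} years to half of US households")

    Note that with these (again, illustrative) numbers the personal computer takes longer than the television – the tidy “ever faster” staircase is partly an artifact of which dates you pick.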

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it take fewer than nine years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, and destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and that computers therefore become exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.
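
    For a sense of the arithmetic behind that claim – and of why it is an extrapolation rather than a natural law – here is a minimal sketch. The 1971 baseline is the oft-cited Intel 4004 figure; treat the numbers as illustrative:

        # A literal reading of Moore's "Law": one doubling every two years.
        count = 2300  # ~transistors on the 1971 Intel 4004 (illustrative baseline)
        for year in range(1973, 2017, 2):
            count *= 2  # "doubles every two years"
        print(f"Implied transistor count by {year}: {count:,}")  # 22 doublings -> ~9.6 billion

    Twenty-two doublings turn 2,300 transistors into roughly ten billion. That is the scale of the claim – and, as a bare extrapolation, it reads more like an industry roadmap than a law of nature.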

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, and it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that, after generations of failure, the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: while computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons; each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit), whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI’s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.


    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfen aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being,” which he blames in his concluding paragraph for providing “theological and legislative comfort to chattel slavery,” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that, were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting, however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “Art in the Age of Mechanical Reproducibility,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us; indeed, this premise, understood in terms of stewardship or commonwealth, would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis, Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley, from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse, and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.
