
    The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus

    Anthony Galluzzo

    I

    Victor Frankenstein, the titular character and “Modern Prometheus” of Mary Shelley’s 1818 novel, drawing on his biochemical studies at the University of Ingolstadt, creates life by reanimating the dead. While the gothic elements of Shelley’s narrative, like those of its twentieth-century film adaptations, ensure its place in the pantheon of popular horror, the novel is also arguably the first instance of science fiction, used by its young author to interrogate the Prometheanism that animated the intellectual culture of her day.

    Prometheus—the titan who steals fire from the Olympian gods and gives it to humankind, suffering imprisonment and torture at the hands of Zeus as a result—was an emblem for both socio-political emancipation and techno-scientific mastery during the European enlightenment. These two overlapping, yet distinct, models of progress are nonetheless confused, one with the other, then and now, often with disastrous results, as Shelley dramatizes over the course of her novel.

    Frankenstein embarks on his experiment to demonstrate that “life and death” are merely “ideal bounds” that can be surpassed, to conquer death and “pour a torrent of light into our dark world.” Frankenstein’s motives are not entirely beneficent, as we can see in the lines that follow:

    A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve their’s. Pursuing these reflections, I thought, that if I could bestow animation upon lifeless matter, I might in process of time (although I now found it impossible) renew life where death had apparently devoted the body to corruption. (Shelley 1818, 80-81)

    The will to Promethean mastery over nature merges here with a will to power over other humanoid, if not entirely human, beings. Frankenstein abandons his creation, with disastrous results for the creature, his family, and himself. Over the course of the two centuries since its publication, “The Modern Prometheus” has been read, too simply, as a cautionary tale regarding the pitfalls of techno-scientific hubris, invoked in regard to the atomic bomb or genetic engineering, for example; it is such a tale, but only in part.

    If we survey the history of the twentieth century, this caution is understandable. Even in the twenty-first century, a new Frankensteinism has taken hold among the digital overlords of Silicon Valley. Techno-capitalists from Elon Musk to Peter Thiel to Ray Kurzweil and their transhumanist fellow travelers now literally pursue immortality and divinity, striving to build indestructible bodies or to merge with their supercomputers, preferably on their own high-tech floating islands, or perhaps off-world, as the earth and its masses burn in a climate catastrophe entirely due to the depredations of industrial capitalism and its growth imperative.

    This last point is significant, as it represents the most recent example of the way progress-as-emancipation—social and political freedom and equality for all, including non-human nature—is distinct from and often at odds with progress as technological development: a distinction that many of today’s techno-utopians embrace under the rubric of a “dark enlightenment,” in a seemingly deliberate echo of Victor Frankenstein. Mary Shelley’s great theme is the substantive distinction between these two models of progress and enlightenment, which are intertwined for historical and ideological reasons: a tragic marriage. It is no coincidence that she chose to explore this problem in a tale of tortured familial relationships, one that includes the fantasy of male birth alongside immortality. It was both a personal and a family matter for her, as the daughter of the radical enlightenment intellectuals Mary Wollstonecraft and William Godwin. Although her mother died a few days after Mary’s birth, she was raised according to strict radical enlightenment principles by her father, whose 1793 Enquiry Concerning Political Justice and its Influence on Morals and Happiness argues against the state, private property, and marriage. In that same text, Godwin predicts a future in which human beings, perfected through the force of reason, would achieve a sexless, sleepless, god-like immortality: a 1790s-era version of the technological Singularity. Godwin’s daughter drew on this vision in crafting her own Victor Frankenstein.

    Godwin would later modify these early proto-futurist views—in the wake of his wife’s death and a debate with the Reverend Thomas Malthus—even as he maintained his radical political commitments. His early work nonetheless demonstrates the extent to which radical enlightenment thinking was entwined, from the very start, with “dark enlightenment” in today’s parlance, ranging from accelerationism to singulatarianism and ecomodernism.[1] His later revision of his earlier views offers us an early example of how we might separate an emancipatory social and political program from those Promethean dreams of technological mastery used by capitalist and state socialist ideologues to justify development at any cost. In early Godwinism we find one prototype for today’s Promethean techno-utopianism. His subsequent debate with Thomas Malthus and concomitant retreat from his own earlier futurist Prometheanism illuminate how we might combine radical, or even utopian, political commitments with an awareness of biophysical limits in our own moment of ecological collapse.

    Godwin defines the “justice” that animates his 1793 Enquiry Concerning Political Justice as that “which benefits the whole, because individuals are parts of the whole. Therefore to do it is just, and to forbear it is unjust. If justice have any meaning, it is just that I should contribute every thing in my power to the benefit of the whole” (Godwin 1793, 52). Godwin illustrates his definition with a hypothetical scenario that provoked accusations of heartlessness among both conservative detractors and radical allies at the time. Godwin asks us to imagine a fire striking the palace of François Fénelon, the progressive archbishop of Cambray, author of an influential attack on absolute monarchy:

    In the same manner the illustrious archbishop of Cambray was of more worth than his chambermaid, and there are few of us that would hesitate to pronounce, if his palace were in flames, and the life of only one of them could be preserved, which of the two ought to be preferred. But there is another ground of preference, beside the private consideration of one of them being farther removed from the state of a mere animal. We are not connected with one or two percipient beings, but with a society, a nation, and in some sense with the whole family of mankind. Of consequence that life ought to be preferred which will be most conducive to the general good. In saving the life of Fénelon, suppose at the moment when he was conceiving the project of his immortal Telemachus, I should be promoting the benefit of thousands, who have been cured by the perusal of it of some error, vice and consequent unhappiness. Nay, my benefit would extend farther than this, for every individual thus cured has become a better member of society, and has contributed in his turn to the happiness, the information and improvement of others. (Godwin 1793, 55)

    This passage illustrates the consequentialist perfectibilism that distinguished the philosopher’s theories from those of his better-known contemporaries, such as Thomas Paine, with his theory of natural right and social contract, or even the utilitarian Jeremy Bentham, to whom Godwin is sometimes compared. In the words of Mark Philp, “only by improving people’s understanding can they become more fully virtuous, and only as they become more fully virtuous will the highest and greatest pleasures be realized in society” (Philp 1986, 84). In other words, the unfortunate chambermaid must be sacrificed if that is what it takes to save the philosophe whose written output will benefit multitudes by sharpening their rational capacities, congruent with the triumph of reason, virtue, and human emancipation.

    Godwin goes on to make this line of reasoning clear:

    Supposing I had been myself the chambermaid, I ought to have chosen to die, rather than that Fénelon should have died. The life of Fénelon was really preferable to that of the chambermaid. But understanding is the faculty that perceives the truth of this and similar propositions; and justice is the principle that regulates my conduct accordingly. It would have been just in the chambermaid to have preferred the archbishop to herself. To have done otherwise would have been a breach of justice. Supposing the chambermaid had been my wife, my mother or my benefactor. This would not alter the truth of the proposition. The life of Fénelon would still be more valuable than that of the chambermaid; and justice, pure, unadulterated justice, would still have preferred that which was most valuable. Justice would have taught me to save the life of Fénelon at the expence of the other. What magic is there in the pronoun “my,” to overturn the decisions of everlasting truth? (Godwin 1793, 55)

    Godwin amends the puritan rigor of these positions in subsequent editions of his work, as he came to recognize the value of affective bonds and personal attachments. But here in the first edition of Political Justice we see a pristine expression of his rationalist radicalism, for which the good of the whole necessitates the sacrifice of a chambermaid, a mother, and one’s own self to Reason, which Godwin equates with the greatest good.

    The early Godwin here exemplifies a central antinomy of the European enlightenment, as he strives to yoke an inadvertently inhuman plan for human perfection and mastery to an emancipatory vision of egalitarian social relations. Godwin pushes the Enlightenment-era deification of ratiocination to a visionary extreme in presenting very real inequities as so many cases of benighted judgment waiting for a personified, yet curiously disembodied, Reason’s correction in the fullness of time and entirely by way of debate. It was this aspect of Godwin’s project that inspired John Thelwall, the radical writer and public speaker, to declare that while Godwin recommends “the most extensive plan of freedom and innovation ever discussed by a writer in English,” he “reprobate[s] every measure from which even the most moderate reform can be rationally expected” (Thelwall 2008, 122). E.P. Thompson would later echo this verdict in The Poverty of Theory, when he compared the vogue for structuralist—or Althusserian—Marxism among certain segments of the 1970s-era New Left to Godwinism, which he described as another “moment of intellectual extremism, divorced from correlative action or actual social commitment” (Thompson 1978, 244).

    Godwin blends a necessitarian theory of environmental influence, a belief in the perfectibility of the human race, a perfectionist version of the utilitarian calculus, and a quasi-idealist model of objective reason into an incongruous and extravagantly speculative rationalist metaphysics. The Godwinian system, in its first iteration at least, resembles Kantian and post-Kantian German idealism as much as it does the systems of Locke, Hume, and Helvetius, Godwin’s acknowledged sources. So, according to Godwin’s syllogistic precepts, it is only through the exercise of private judgment and a process of rational debate—“the clash of mind with mind”—that Truth will emerge, and with Truth, Political Justice; here is a model of enlightenment that resonates with Kant’s roughly contemporaneous ideal-type of progress and Jürgen Habermas’s twentieth-century reconstruction of that ideal in the form of a “liberal-bourgeois public sphere.” It is for this reason, and in spite of his conflicted sympathies with French revolutionaries and British radicals alike, that the philosopher rejects both violent revolution and the kind of mass political action exemplified by Thelwall and the London Corresponding Society, hence Thelwall’s and Thompson’s damning judgments. For Godwin, rational persuasion is the only feasible way of effecting the wholesale revolutionary transformation of “things as they are.”

    But this precise reconstruction of Godwin’s philosophical and political system does not capture the striking novelty of his project. In the example above, we find a supplementary argument of sorts running underneath the consequentialist perfectibilism. Although we can certainly read in Godwin’s disparagement and hypothetical sacrifice of both a chambermaid and his own mother a historically typical, if unconscious, example of the class prejudice and misogyny the radical philosopher otherwise attacks at length in this same treatise, I would instead call attention to the implicit metaphor of embodiment and natality that unites maid, mother, and Godwin’s own unperfected self. The chambermaid is one step closer to the “mere animal” from which Fénelon, or his significantly disembodied work, offers an escape. If the chambermaid were rational in the Godwinian sense, she would easily offer herself as sacrifice to Fénelon and the Reason that finds a fitting emblem in the flames that consume our hypothetical building. While Godwin underlines the disinterested character of this choice in next substituting himself for the chambermaid, his willingness to hypothetically sacrifice his mother points to his rigid rejection of personal attachments and emotional ties. Godwin would substantially modify this viewpoint a few years later in the wake of his relationship with the pioneering feminist Mary Wollstonecraft.

    The figure of the mother—whose embodied life Godwin would consign to the fire for the sake of Fénelon’s future intellectual output and its refining effects on humanity—is an overdetermined symbol that unites affective ties with the irrational fact of our bodily and sexual life: all of which must and will be mastered through a Promethean process of ratiocination indistinguishable from justice and reason. If one function of metaphor, according to Hans Blumenberg, is to provide the seedbed for conceptual thought, Godwin translates these subtexts into an explicit vision of a totally rational and rationalized future in the final, speculative, chapter of Political Justice.[2] It is in this chapter, as we shall see below, that Godwin responds to those critics who argued that population growth and material scarcity made perfectibilism impossible with a vision of humans made superhuman through reason.

    Here is the characteristically Godwinian combination of “striking insight” and “complete wackiness,” which emerges from the “science fictional quality of his imagination” in the words of Jenny Davidson.[3] Godwin moves from a prescient critique of oppressive human institutional arrangements, motivated by the radical desire for a substantively just and free form of social organization under which all human beings can realize their capacities, to a rationalist metaphysics that enshrines Reason as a theological entity that realizes itself through a teleological human history. Reason reaches its apotheosis at that point when human beings become superhuman, transcending contingent and creaturely qualities, such as sexual desire, physical reproduction, and death, eviscerated like so many animal bodies thrown into a great fire.

    We can see in Godwin’s early rationalist radicalism a significant antinomy. Godwin oscillates between a radical enlightenment critique that uses ratiocination to expose unjust institutional arrangements—from marriage to private property and the state—and a positive, even theological, version of Reason, for which creaturely limitations and human needs are not only secondary considerations, but primary obstacles to be surpassed on the way to a rationalist super-humanity that resembles nothing so much as a supercomputer, avant la lettre.

    Many critics of the European Enlightenment—from an older Godwin and his younger romantic contemporaries through twentieth-century feminist, post-colonial, and ecological critics—have underlined the connection between these Promethean metaphysics, ostensibly in the service of human liberation, and various projects of domination. Western Marxists like Max Horkheimer and Theodor Adorno (1947) overlap with later feminist critics of the scientific revolution, such as Carolyn Merchant (1980), in naming instrumental rationality as the problem. As opposed to an ends-oriented version of reason—the ends being emancipation or human flourishing—rationalism as a technical means for dominating the natural world, managing populations, or disciplining labor became the dominant model of rationality during and after the European Enlightenment, in keeping with the ideological requirements of a nascent capitalism and colonialism. But in the case of the early Godwin and other Prometheans, we can see a substantive version of reason, reified as an end-in-itself, which overlaps with the critical philosophy of Hegel, the philosophical foundation of Marxism and of the Frankfurt School variant on display in the work of Adorno and Horkheimer.[4] The problem with Prometheanism is that its proponents’ ideal-type of technological rationality is not instrumental enough: rather than a reason or technology subordinate to human flourishing and collective human agency, the proponents of Prometheus subordinate collective human (and creaturely) ends to a vision of reason indistinguishable from a fantasy of an autonomous technology with its own imperatives.

    Langdon Winner, in analyzing autonomous technology as idea and ideology in twentieth-century industrial capitalist (and state socialist) societies, underlines this reversal of means and ends or what he calls “reverse adaptation”: “The adjustment of human ends to match the character of available means. We have already seen arguments to the effect that persons adapt themselves to the order, discipline, and pace of the organizations in which they work. But even more significant is the state of affairs in which people come to accept the norms and standards of technical processes as central to their lives as a whole” (Winner 1977, 229). Winner’s critique of “rationality in technological thinking” is made even more striking when we consider that the early Godwin’s Promethean force of reason, as evinced by the final chapter of the 1793 Political Justice—in contradistinction to the ethical and political rationalism that is also present in the text—anticipates twentieth- and twenty-first-century techno-utopianism. For Winner, “if one takes rationality to mean the accommodation of means to ends, then surely reverse-adapted systems represent the most flagrant violation of rationality” (Winner 1977, 229).

    This version of rationality, still inchoate in the eighteenth-century speculations of Godwin, takes mega-technological systems as models, rather than tools, for human beings, as Günther Anders argues against those who depict anti-Prometheans as bio-conservative defenders of things as they are simply because they are that way. The problem with Prometheanism does not reside in its adherents’ endorsement of technological possibilities as such so much as in their embrace of the “machine as measure” of individual and collective human development (Anders 2016). Anders converges with Adorno and Horkheimer, his Marxist contemporaries, for whom this “machine” is a mystified metonym for irrational capitalist imperatives.

    Rationalist humanism becomes technological inhumanism under the sign of Prometheus, which, according to present-day “accelerationism” enthusiast Ray Brassier, must recognize “the disturbing consequences of our technological ingenuity” and extol “the fact that progress is savage and violent” (Brassier 2013). Brassier, operating from a radically different, avowedly nihilist, set of presuppositions than William Godwin, nonetheless recalls the 1793 Political Justice in once again defining rationalism as a reinvigorated Promethean “project of re-engineering ourselves and our world on a more rational basis.” Accelerationists strive to revive both rationalist radicalism—with the omniscient algorithm standing in for the perfectibilists’ reason—and the Promethean imperative to reengineer society and the natural world, because or in spite of the ongoing global climate change catastrophe. Self-described “left” accelerationists Alex Williams and Nick Srnicek argue that capitalism, rather than being the great driver of an ecologically catastrophic growth, must be dismantled because it “cannot be identified as the agent of true acceleration,” or #accelerate (Williams and Srnicek 2013, 486-7): a shorthand for their attempt to reboot a version of progress that arguably finds its first apotheosis in the 1790s. Brassier’s defense of Prometheanism takes the form of an extended reply to various critics, whose emphasis on limits and equilibrium, the given and the made, he rejects as in thrall to religious, and specifically Christian, notions. Brassier, who outlines his rationalism as systematic method or technique without presupposition or limits—along the lines of “God is dead, anything is possible”—seems unaware of actual material limitations and of the theological, specifically Gnostic, origins of a very old human deification fantasy, the Enlightenment-era secularization of which was arguably first recognized by Godwin’s daughter in her Frankenstein (Shelley 1818).

    In this way, the Godwin of 1793 also, and more dramatically, looks forward to our own transhumanist devotees of the coming technological singularity, who claim that human beings will soon merge with immensely powerful and intelligent supercomputers, becoming something else entirely in the process, hence “transhumanism.” According to prominent Silicon Valley “singulatarian” Ray Kurzweil, “The Singularity will allow us to transcend these limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want (a subtly different statement from saying we will live forever)” (Kurzweil 2006, 25).

    Kurzweil explicitly frames this transformation as the inevitable culmination of a mechanically teleological movement; and, like many futurists and their eighteenth-century perfectibilist forerunners, he holds that human perfection necessitates the supersession of the human. Kurzweil illustrates the paradoxical character of a Promethean futurism that, in seeking both human perfection and mastery, seeks to dispense with the human altogether: “The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality. If you wonder what will remain unequivocally human in such a world, it’s simply this quality: ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations” (25).

    Even more than accelerationism, Kurzweil’s Singulatarianism illustrates the “hubristic humility” that defines twentieth- and twenty-first-century Prometheanism, according to Anders. Writing in the wake of the atomic bomb, the cybernetic revolution, and the mass-produced affluence exemplified by the post-war United States, Anders recognized how certain self-described rationalist techno-utopians combined a hubristic faith in technological achievement with a “Promethean shame” before these same technological creations. This shame arises from the perceived gap between human beings and their technological products: how, unlike our machines, we are “pre-given,” saddled with contingent bodies we neither choose nor design, bodies that are fragile, needy, and mortal. The mechanical reproducibility of the technological system or industrial artifact represents a virtual immortality that necessarily eludes unique and perishable human beings, according to Anders. Here Anders seemingly develops the earlier work of Walter Benjamin, his cousin, on aura and mechanical reproduction—but in a very different direction, as Anders writes: “in contrast to the light bulb or the vinyl record, none of us has the opportunity to outlive himself or herself in a new copy. In short: we must continue to live our lifetimes in obsolete singularity and uniqueness. For those who recognize the machine-world as exemplary, this is a flaw and as such a reason for shame.”[5]

    Although we can situate the work of Godwin at the intersection of various eighteenth- and nineteenth-century discourses, including perfectibilist rationalism, civic republicanism, and Sandemanian Calvinism, on the one hand, or anarchism and romanticism, on the other, I will argue here that in juxtaposing Godwinism with present-day analogs like the transhumanism and accelerationism briefly described above, we can see the extent to which older—late eighteenth- and early nineteenth-century—utopian forms are returning, lending some credence to Alain Badiou’s claim that

    We are much closer to the 19th century than to the last century. In the dialectical division of history we have, sometimes, to move ahead of time. Just like maybe around 1840, we are now confronted with an absolutely cynical capitalism, more and more inspired by the ideas that only work backwards: poor are justly poor, the Africans are underdeveloped, and that the future with no discernable limit belongs to the civilized bourgeoisie of the Western world. (Badiou 2008)

    We can also see in recent conflicts between accelerationists and certain partisans of radical ecology the return of another seeming antinomy—one which pits cornucopian futurists against Malthusians, or at least those who emphasize the material limits to growth and how human beings might reconcile ourselves to those limits—that has its origin point in the Reverend Thomas Malthus’s anonymously published An Essay on the Principle of Population, as it affects the Future Improvement of Society with remarks on the Speculations of Mr. Godwin, M. Condorcet, and Other Writers (1798). Malthus’s demographic response to Godwinism led in turn to a long-running debate and the rise of an ostensibly empirical political economy that took material scarcity as its starting point. Yet, if we examine Malthus’s initial response, alongside the Political Justice of 1793, we can observe several shared assumptions and lines of continuity that unite these seemingly opposed perspectives, as each of these thinkers delineates a recognizably bio-political project for human improvement and population management, in left and right variants. Each of these variants obscures the social determinants of revolutionary movements and technological progress. Finally, it was as much Godwin’s debate with Malthus as the philosopher’s tumultuous and tragic relationship with Mary Wollstonecraft that precipitated a shift in his perspective regarding the value of emotional bonds, personal connections, and material limits: seemingly disparate concerns linked in Godwin’s imagination through the sign of the body. The body also functions as metonym for that same natural world, the limits of which Malthus brandished in order to discredit the utopian aspirations that drove the revolutionary upheavals of the 1790s; Godwin later sought to reconcile his utopianism with these limits.[6] This intellectual reconciliation—which was very much in line with the English romantics’ own version of a more benign natural world threatened by incipient industrialism, as opposed to Malthus’s brutally utilitarian nature—was a response to Malthus and to the early Godwin’s own Prometheanism, best exemplified in the final section of the 1793 Political Justice, to which we will turn below.

    Two generations of Romantics—from Wordsworth and Coleridge through De Quincey, Hazlitt, and Shelley—sought to counter Malthus’s version of the natural world as resource stock and constraint with a holistic and dynamic model of nature, under which natural limits and possibilities are not inconsistent with human aspirations and utopian hopes. Malthus offered the Romantics “a malign muse,” in the words of his biographer Robert Mayhew, who writes of two exemplary Romantic figures from this period: “if nature is made of antipathies, Blake and Hegel in their different ways suggest that such binaries can be productive of a dialectic advance in our reasoning” as “we look for ways to respect nature and to use it with a population of 7 billion” (Mayhew 2014, 86).

    One irony of intellectual and political history is how often our new Prometheans—transhumanists, singulatarians, accelerationists, and others—lump both narrowly Malthusian and more expansive “Romantic” ecologies under the rubric of Malthusianism, which is nowadays more slur than accurate description of Malthus’s project. Malthus wielded the threat of natural scarcity or “the Principle of Population” as an ideological tool against reform, revolution, or “the Future Improvement of Society,” as evinced in the very title of his long essay. In the words of Kerryn Higgs, to follow Malthus involves “several key elements” beyond a concern with overpopulation, such as “a resistance to notions of social improvement and social welfare, punitive policies for the poor, a tendency to blame the poor for their own plight, and recourse to speculative theorizing in the service of an essentially political argument” (Higgs 2014, 43). It is among eugenicists and social Darwinists, but also among today’s cornucopian detractors of Malthusianism, that we find Malthus’s heirs, if we attend to his and now their instrumental view of the natural world as a factor in capitalist economic calculation—as exemplified by the rhetoric of “ecosystem services” and “decoupling”—in addition to a shared faith in material growth, to which Malthus was not opposed. Meanwhile, self-declared Malthusians like Paul and Anne Ehrlich, in their misplaced focus on overpopulation, often in the developing world, conveniently avoid any discussion of consumption in the developed world, let alone the unsustainable growth imperative built into capitalism itself. In fact, for Malthus and his epigones, necessity—growth outstripping available resources—functions as spur for technological innovation, mirroring, in negative form, the teleological trajectory of the early Godwin—a telling convergence I will explore at length in the latter part of this essay.

    “Malthusianism” is a shorthand used by orthodox economists and apologists for capitalist growth to dismiss ecological concerns. Marxists and other radicals—heirs to Godwin’s project, in ways good and bad, despite their protestations of materialism—too often share this investment in growth and techno-science as an end in itself. While John Bellamy Foster and others have made a persuasive case for Marx’s ecology—to be found in his work on nineteenth-century soil exhaustion, inspired by Liebig, and the town/country rift under capitalism—we can also find a broadly Promethean rejection of anything resembling a discourse of natural limits within various orthodox Marxisms, beginning in the later nineteenth century. Yet to recognize both the possibilities and limits of our situation—which must include the biophysical conditions of possibility for capitalist accumulation and any program that aims to supplant it—is, for me, the foundation for any radical and materialist approach to the world and politics, against Malthus and the young, futurist Godwin, to whom we now move.

    II

    Godwin translates this metaphorical substrate of his Fénelon thought experiment into an explicitly conceptual and argumentative form. He pushes the logic of eighteenth-century perfectibilism to a spectacular, speculative, and science-fictional extreme in Chapter 12 of Political Justice’s final volume on “property.” It is in this chapter that the philosopher outlines a future utopia on the far side of rational perfection. Beginning with the remark, attributed to Benjamin Franklin by Richard Price, that “mind will one day become omnipotent over matter,” Godwin offers us a series of methodical speculations as to how this might literally come to pass. He begins with the individual mind’s power to either exacerbate or alleviate illness or the effects of age, in order to illustrate his central contention: that we can overcome apparently hard and fast physical limits and subject ostensibly involuntary physiological processes to the dictates of our rational will. It is on this basis that Godwin concludes: “if we have in any respect a little power now, and if mind be essentially progressive…that power may…and inevitably will, extend beyond any bounds we are able to ascribe to it” (Godwin 1793, 455).

    Godwin marries magical voluntarism on the ontogenetic level to the teleological arc of Reason on the phylogenetic level, all of which culminates in perhaps the first—Godwinian—articulation of the singularity: “The men who therefore exist when the earth shall refuse itself to a more extended population will cease to propagate, for they will no longer have any motive, either of error or duty, to induce them. In addition to this they will perhaps be immortal. The whole will be a people of men, and not of children. Generation will not succeed generation, nor truth have in a certain degree to recommence at the end of every thirty years. There will be no war, no crimes, no administration of justice as it is called, and no government” (Godwin 1793, 458).

    James Preu (1959) long ago established Godwin’s peculiar intellectual debt to Jonathan Swift, and we can discern some resemblance between Godwin’s future race of hyper-rational, sexless immortals and the Houyhnhnms; as with Godwin’s other misprisions of Swift, the differences are as telling as the similarities. Godwin transforms Swift’s ambiguous, arguably dystopian and misanthropic, depiction of equine ultra-rationalists, and their animalistic Yahoo humanoid stock, into an unequivocally utopian sketch of future possibility. For Godwin, it is our Yahoo-like “animal nature” that must be subdued or even exterminated, as Gulliver’s Houyhnhnm master at one point suggests in a coolly calculating way that invites comparison to Swift’s “Modest Proposal,” even as both texts look forward to the utilitarian discourse of population control that finds its apotheosis in Malthus’s 1798 response to Godwin. Godwin would later embrace some version of heritable characteristics, or at least innate human inclinations, but despite his revisions of his views through subsequent editions of Political Justice and beyond, he is still very much the Helvetian environmentalist in the 1793 disquisition. He was therefore free of, or even at odds with, a proto-eugenic eighteenth-century discourse of breeding—after Jenny Davidson’s (2008) formulation—that overlapped with other variants of perfectibilism.

    But as Davidson and others note, Godwin shares with his antagonist Malthus a Swiftian aversion to sex, which we can also see in the Godwinian critique of marriage and the family. This critique begins with a still-radical indictment of marriage as a proprietary relationship under which men exercise “the most odious of monopolies over women” (Godwin 1793, 447). Godwin predicts that marriage, and the patriarchal family it safeguards, will be abolished alongside other modes of unequal property. But, rather than inaugurating a regime of free love and license, as conservative critics of Godwinism contended at the time, the philosopher predicts that this “state of equal property would destroy the relish for luxury, would decrease our inordinate appetites of every kind, and lead us universally to prefer the pleasures of intelligence to the pleasures of the sense” (Godwin 1793, 447). Rather than simply ascribe this sentiment to a residual Calvinism on Godwin’s part, we can see this programmatic elimination of sexual desire as of a piece with “killing the Yahoo,” consigning the maid-servant’s, his mother’s, his own body to the fires for Fénelon and a perfectly rational future state, i.e., the biopolitical rationalization of human bodies for Promethean reason and Promethean shame. The early Godwin here again suggests our own transhumanist devotees who embrace the sexual possibilities supposedly afforded by AI while they manifest a “complete disgust with actual human bodies,” exemplifying Anders’s Promethean shame according to Michael Hauskeller (2014). From the messy body to virtual bodies, from the uncertainties and coercions of cooperation to self-sex, finally from sex to orgasmic cognition, transhumanists—in an echo of the young Godwin, who predicted sexual intercourse would give way to rational intercourse with the triumph of Reason—want to “make the pleasures of mind as intense and orgiastic as … certain bodily pleasures as they hope for a new and improved rational intercourse with a new and improved, virtual body, in the future” (Hauskeller 2014).

    In this speculative coda to his visionary political treatise, Godwin’s predictive sketch of human rationalization as transformation, from Yahoo to Houyhnhnm and/or post-human, represents a disciplinary program in Michel Foucault’s sense: “a technique” that “centers on the body, produces individualizing effects, and manipulates the body as a source of forces that have to be rendered both useful and docile” (Foucault 2003, 249).[7] We can trace the intersection between perfectibilist, even transhumanist, dreams and disciplinary program in Godwin’s comments regarding the elimination of sleep, an apparent prerequisite for overcoming death, which he describes as “one of the most conspicuous infirmities of the human frame,” specifically “because it is…not a suspension of thought, but an irregular and distempered state of the faculty” (456).

    Dreams, or the unregulated and irrational affective processes they embody, provoke a panicked response on the part of Godwin at this point in the text. Godwin’s response accords with the consistent rejection of sensibility and sentimental attachments of all kinds—seen throughout the 1793 PJ—from romantic love to familial bonds. We can find in Godwin’s account of sleep and his plan for its elimination through an exertion of rational will and attention—something like an internal monitor in the mold of a Benthamite watchman presiding over a 24/7 panoptic mind—the operation of an internalized disciplinary power indistinguishable from our new, wholly rational and rationalized, subject’s private judgment operating in a system without external authority, government, or disciplinary power. And it is no coincidence that Godwin’s proposed subjugation of sleep immediately follows a curiously contemporary passage: “If we can have three hundred and twenty successive ideas in a second of time, why should it be supposed that we should not hereafter arrive at the skill of carrying on a great number of contemporaneous processes without disorder” (456).

    It should be noted again here that Godwin’s futurist idyll, which includes sexless immortals engaged in purely rational intercourse, specifically responds to earlier eighteenth-century arguments regarding human population and the resource constraints that limit population growth and, by extension, the widespread abundance promised in various perfectibilist plans for the future. This new focus on demography and the management of populations during the latter half of the eighteenth century in the Euro-American world is a second technology of power, for Foucault, that “centers not upon the body but upon life: a technology that brings together the mass effects characteristic of a population,” in order “to establish a sort of homeostasis, not by training individuals, but by achieving an overall equilibrium that protects the security of the whole from internal dangers” (Foucault 2003, 249). This is the biopolitical mode of governance—the regulation of the masses’ bio-social processes—that characterizes the modern epoch for Foucault and his followers.

    Yet, while Foucault admits that both technologies—disciplinary and biopolitical—are “technologies of the body,” he nonetheless counterpoises the “individualizing” to the demographic technique. But, as we can see in the 1793 PJ, in which Godwin proffers a largely disciplinary program as a solution to the original bio-political problem—a solution that would inspire Thomas Malthus’s classic formulation of the population problem a few years later, as we shall explore below—these two technologies were intertwined from the start. The subsequent history of futurism in the west marries various disciplinary programs, powered by Promethean shame and its fantasies of becoming “man-machine,” to narrowly bio-political campaigns. These campaigns range from the exterminationist eugenicism of the twentieth-century interwar period to more recent techno-survivalist responses to the ecological crisis on the part of Silicon Valley’s Singulatarian elites, some of whom look forward to immortality on Mars while the Earth and its masses burn.[8]

    Godwin further highlights these futurist hopes in the second revision of Political Justice (1798), in which he underlines the central role of mechanical invention—in keeping with the general principle enshrined in the first volume of the treatise under the title “Human Inventions Capable of Perpetual Improvement”—making the technological prostheses implicit in these early speculations explicit. In predicting an ever-accelerating multiplication of cognitive processes—assuming these processes are delinked from disorder, human or Yahoo—Godwin anticipates both the discourse of cybernetics and its more recent accelerationist successors, for whom the dream of perfectibility—and Godwin’s sexless, sleepless rationalist immortals—can only be achieved through AI and the machinic supersession of the human Yahoo.

    In fact, our new futurists frequently invoke the methodologically dubious Moore’s Law in defense of their claims for acceleration and its inevitability. Moore’s Law—named after Intel co-founder Gordon Moore, whose 1965 observation became the prediction that the number of transistors in an integrated circuit, and with it processing power, doubles roughly every two years—revives Godwin’s prophecy in a cybernetic register. It also suggests Thomas Malthus’s “iron” law of population. Malthus argued that “Population, when unchecked, increases in a geometrical ratio,” while “subsistence”—by which he denotes agricultural yield—increases only in an arithmetical ratio. Malthus rendered this dubious “law” as a mathematical formula, thereby making it appear indisputable, although he makes his motivations clear when he writes, explicitly in response to Godwin’s speculations in the last chapter of the 1793 Political Justice, that his law “is decisive against the possible existence of a society, all of the members of which should live in ease, happiness, and comparative leisure; and feel no anxiety about providing the means of subsistence for themselves and families” (Malthus 1798, 16-17; see also Engels 1845).
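    To see what the mathematical rendering amounts to, the two ratios can be written schematically as follows (a formalization for illustration only, not Malthus’s own notation; the twenty-five-year doubling period is the one Malthus cites for unchecked population, while the constant increment k is assumed for the sake of the contrast):

    P(t) = P(0) · 2^(t/25)   (population, geometrical: doubling every twenty-five years)

    S(t) = S(0) + k·t   (subsistence, arithmetical: a fixed increment per period)

    Whatever value k takes, the ratio P(t)/S(t) eventually grows without bound, which is the entire force of Malthus’s “decisive” argument. Moore’s Law has the same exponential form, N(t) = N(0) · 2^(t/2), with transistor counts in place of population, which is why it lends itself so readily to the same air of mathematical inevitability.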

    Godwinism was for Malthus a synecdoche for both 1790s radicalism and radical egalitarianism generally, while the first Essay on Population is arguably a late, “proto-scientific,” entry in the paper war between English radicals and counterrevolutionary antijacobins—initiated by Edmund Burke’s Reflections on the Revolution in France (1790) and to which Godwin’s treatise was one among many responses—that defined literary and political debate in the wake of the French Revolution. Rather than simply arguing for biophysical limits, Malthus reveals his ideological hand in his discussion of the poor and the Poor Laws—the parish-based system of charity established in late medieval England to alleviate extreme distress among the poorest classes—against which he railed in the several editions of the Essay. Whereas in the past population had been held in check through “positive” checks, such as pestilence, famine, or warfare, for Malthus the growth of civilization introduced “preventive” checks, including chastity, restraint within marriage, or even the conscious decision to delay or forego marriage and reproduction due to the “foresight of the difficulties attending the rearing of a family,” often prompted by “the actual distresses of some of the lower classes, by which they are disabled from giving the proper food and attention to their children” (35).

    Parson Malthus largely ascribes this decidedly Christian and specifically protestant capacity for “rational” self-restraint to his own industrious middle class; and, insofar as the peasantry possessed this preventive urge, alongside a “spirit of independence,” it was undermined by the eighteenth-century British Poor Laws. Malthus provides the template for what are now standard-issue conservative attacks on social provision in his successful attacks on the Poor Laws, which in providing a safety net eradicated restraint among the poor, leading them to marry, reproduce, and “increase reproduction without increasing the food for its support.” Malthus invokes the same absolute limit in his naturalistic rejection of Godwin’s (and others’) egalitarian radicalism, foreclosing any examination of the production and distribution of surplus and scarcity in a class society. Even more than this, Malthus uses his natural laws to rationalize all of those institutional arrangements under threat during the French Revolutionary period, from the existing division of property to traditional marriage arrangements, but in an ostensibly objective manner that distinguished his approach from Burke’s earlier encomia to a dying age of chivalry. It is arguably for this reason that the idea of natural limits, in general, is a suspect one among subsequent left-wing formations, for good and ill.

    III

    Malthus, who anonymously published his Essay on Population in 1798, proclaims at the outset that his argument is “conclusive against the perfectibility of the mass of mankind” (Malthus 1798, 95). And if there were any doubt as to the identity of Malthus’s target, we need only look to the work’s subtitle; as Malthus explains in the book’s first sentence, “the following essay owes its origin to a conversation with a friend on the subject of Mr. Godwin’s essay on avarice and profusion, in his Enquirer.” The friend was Thomas’s father, Daniel Malthus, an admiring acquaintance of Godwin’s who nonetheless encouraged (and subsidized) his son’s writing on this topic. The parson dedicates six chapters (chapters 10-15) to a refutation of Godwinism, often larded with mockery of Godwin, against whose speculative rationalism Malthus counterpoises his own supposedly empirical method; the same method that allowed him to discover that the “lower classes of people” should never be “sufficiently free from want and labour, to attain any high degree of intellectual improvement” (95). And although Malthus explicitly names The Enquirer—the 1797 collection of essays in which Godwin admits to changing his mind on a variety of positions, as Malthus acknowledges at one point—as the impetus for his Essay, the work primarily responds to the earlier Political Justice and its final chapter in particular, because Godwin’s futurist speculations (and their more ominous biopolitical subtexts) respond to an “Objection to This System From the Principle of Population,” in the words of the chapter subheading. Godwin replies to this hypothetical objection several years prior to Malthus’s critique, the originality of which was said at the time to consist in his break with eighteenth-century doxa regarding population. Despite their differences, Montesquieu, Hume, Franklin, and Price all agreed that a growing population is the indisputable marker of progress, and the primary sign of a successful nation, since “the more men in the state, the more it flourishes.” And while Johann Süssmilch, an early pioneer of statistical demography, argued for a fixed limit to the planetary carrying capacity in regard to human population, he also inferred that the planet could hold up to six times as many people as the total global population he estimated at the time.

    Only Robert Wallace, in his 1761 Various Prospects of Mankind, Nature, and Providence—which Marx and Engels would accuse Malthus of plagiarizing—argued that excessive population is an obstacle to human improvement. Wallace offers a prototype for the Godwin/Malthus debate in constructing an elaborate argument for a proto-communist utopian social arrangement, only to undermine his own argument via recourse to the limits of population growth. Godwin invokes Wallace by name, before adverting to Süssmilch and a far-flung future when human beings will have transcended the limits of finitude. Immortals won’t have to reproduce, a point Godwin makes even clearer in both the final edition of Political Justice (1798) and in his first response to the Essay on Population—an 1801 pamphlet entitled Thoughts Occasioned By The Perusal of Dr. Parr’s Spital Sermon—in which he opts for a minimal population of perfected human beings living in a utopian society, rather than an ever-expanding human population.

    While contemporary scholars still read the Godwin/Malthus debate as a simple conflict between progressive optimism and conservative pessimism, we can discern some peculiar commonalities between the early Godwin of the 1793 Political Justice and Malthus. Godwin’s speculations on human perfectibility represent a bio-perfectionist solution to the problems of population, sex, and embodiment generally—a Promethean program for overcoming Promethean shame—as I sketch above. Malthus rejects perfectibility along with the feasibility of physical immortality and pure rationality, adverting to humanity’s “compound nature,” a variation on original sin. In this vein, he also rejects Godwin’s prediction regarding “the extinction of the passion between the sexes,” which has not “taken place in the five or six thousand years that the world has existed” (92). Yet Malthus—in proffering disciplinary self-restraint in the service of a biopolitical equilibrium between population and food supply—offers another such solution, motivated by antithetical political principles, while operating from a common set of Enlightenment-era assumptions regarding the need to regulate bodies and populations (Foucault 2003). The overlap between these ostensible antagonists should not surprise us, since, as Fredrik Albritton Jonsson notes in his critical genealogy of cornucopianism, “cornucopianism and environmental anxieties have been closely intertwined in theory and practice from the eighteenth century onward” (2014). Albritton Jonsson connects the alternation between cornucopian fantasy and environmental anxiety to the booms and busts of environmental appropriation and capitalist accumulation, while he locates the roots of cornucopia “in the realm of alchemy and natural theology. To overcome the effects of the Fall, Francis Bacon hoped to remake the natural order into a second, artificial world. Such theological and alchemical aspirations were intertwined with imperial ideology” (Albritton Jonsson 2014, 167). This strange convergence is most evident in Malthus’s own vision of progress and growth—driven exactly by the population pressure and scarcity that serve as analogue for the early Godwin’s reason—which Malthus, a pioneering apologist for industrial capitalism, did not reject, despite later misrepresentations.

    IV

    Both Marx and Engels would later discern in Malthus’s ostensibly scientific outline of nature’s positive checks on the poor—aimed at both eighteenth century British poor laws and various enlightenment era visions of social improvement—the primacy of surplus population and a reserve army of the unemployed for a nascent industrial capitalism, as Engels notably “summarizes” Malthus’s argument in his Condition of the Working Class in England (1845):

    If, then, the problem is not to make the ‘surplus population’ useful, … but merely to let it starve to death in the least objectionable way, … this, of course, is simple enough, provided the surplus population perceives its own superfluousness and takes kindly to starvation. There is, however, in spite of the strenuous exertions of the humane bourgeoisie, no immediate prospect of its succeeding in bringing about such a disposition among the workers. The workers have taken it into their heads that they, with their busy hands, are the necessary, and the rich capitalists, who do nothing, the surplus population.

    Despite the transparently political impetus behind Malthus’s Essay, his work was taken up by a certain segment of the environmental movement in the twentieth century. These same environmentalists often read and reject both Marx’s and Engels’s critiques of Malthusian political economy, with the disastrous environmental record of orthodox communist and specifically Soviet Prometheanism in mind. John Bellamy Foster notes that many “ecological socialists” have gone so far as to argue that Marx and Engels were guilty of “a Utopian overreaction to Malthusian epistemic conservatism” which led them to downplay (or deny) “any ultimate natural limits to population” and indeed natural limits in general. Faced with Malthusian natural limits, we are told, Marx and Engels responded with “‘Prometheanism’—a blind faith in the capacity of technology to overcome all ecological barriers” (Foster 1998).

    While Marx rejected a fixed and universal law of population growth or food production, stressing instead how population increases and agricultural yields vary from one socio-material context to another, he accepted ecological limits—to soil fertility, for example—in his theory of metabolic rift, as both Foster (2000) and Kohei Saito (2017) demonstrate in their respective projects on Marx’s ecology.

    This perspective was arguably anticipated by the later Godwin himself, in the long and now forgotten Enquiry Concerning Population (1820), written at the urging of his son-in-law Percy Shelley, in order to salvage his reputation from Malthus’s attacks; Malthus was awarded the first chair in political economy at the East India Company College in Hertfordshire, while Godwin’s utopian philosophy was fading from the public consciousness, when it was not an explicit object of ridicule. Godwin returned to the absurdity of Malthus’s theological fixation on the human inability to resist the sexual urge, with a special emphasis on the poor, which we can see in his first response to Malthus’s Essay, although now in a more openly vitriolic fashion, perhaps at the urging of Shelley, for whom the Malthusian emphasis on abstinence and chastity among the poor was “seeking to deny them even the comfort of sexual love” in addition to “keeping them ignorant, miserable, and subservient” (St. Clair 1989, 464). Shelley, unlike the young Godwin of the 1793 Political Justice that influenced the poet’s radical political development, saw in unrestrained sexual intercourse a vehicle of communion with nature.

    The older Godwin offers, in his Of Population, 600 pages of historical accounts and reports regarding population and agriculture—an empiricist overcorrection to Malthus’s accusations of visionary rationalism—in order to show us the variability of different social metabolisms, the efficacy of birth control, and, most importantly, how utopian social organization can and must be built with biophysical limits in mind, against “the occult and mystical state of population” in Malthus’s thinking (Godwin 1820, 476). More than a response to Malthus, this later work also represents a rejoinder to the young proto-accelerationist Godwin, one that nevertheless retains most of his radical social and political commitments. Of Population troubles the earlier Malthusian-Godwinian binary that arguably still underwrites our present-day Anthropocene narrative and the standard historiography of the English Industrial Revolution.

    In 1798, Malthus argued in favor of population and resource constraints, for largely ideological reasons, at the exact moment that the steam engine and the widespread adoption of fossil energy, in the form of coal, enabled what looked like self-sustaining growth, seemingly rendering that paradigm obsolete. But Malthus also argues, toward the end of the Essay, that just as the “first great awakeners of the mind seem to be the wants of the body,” so necessity is “the mother of invention” and of progress (Malthus 1798, 95). Malthus’s myth of capitalist modernity, the negative image of perfectibilism, underwrites the political economy of industrialization. Malthus stressed the power of natural necessity—scarcity and struggle—to compel human accomplishment, against the universal luxury proffered by the perfectibilists.

    Like the good bourgeois moralist he was, Malthus saw in the individual and collective struggle against scarcity—laws of population that function as secularized analogues for original sin—the drivers of technological development and material growth. This is a familiar story of progress and one that, no less than the perfectibilists’ teleological arc of history, elides conflict and contingency in rendering the rise of industrial and Euro-American capitalism as both natural and inevitable. For example, E. A. Wrigley argues, in a substantively Malthusian vein, that it was overpopulation, land exhaustion, and food scarcity in eighteenth-century England that necessitated the use of coal as an engine for growth, the invention of the steam engine in 1784, and the widespread adoption of fossil power over the next century. Prometheans left and right nonetheless use the term “Malthusian” as a synonym for the (equally imprecise) “primitivist” or “Luddite.” But, as Andreas Malm persuasively contends, our dominant narratives of technological progress proceed from assumptions inherited from Malthus (and his disciple Ricardo): “Coal resolved a crisis of overpopulation. Like all innovations that composed the Industrial Revolution, the outcome was a valiant struggle of ‘a society with its back to ecological wall’” (Malm 2016, 23).

    Malthus’s force of necessity is here indistinguishable from Godwinian Progress, spurring on the inevitable march of innovation, without any mention of the extent to which technological development, in England and the capitalist west, was and is shaped by capitalist imperatives, such as the quest for profit or labor discipline. We can see this same dynamic at play in much present-day Anthropocene discourse, some of whose exponents trace a direct line from the discovery of fire to the human transformation of the biosphere. These “Anthropocenesters” oscillate between a Godwinian-accelerationist pole—best exemplified by would-be accelerationists and ecomodernists like Mark Lynas (2011), who wholeheartedly embraces the role of Godwin avatar Victor Frankenstein in arguing that we must assume our position as the God species and completely reengineer the planet we have remade in our own image—and a Malthusian-pessimist pole, according to which all we can do now is learn how to die with the planet we have undone, to paraphrase the title of Roy Scranton’s popular Learning to Die in the Anthropocene (2015).[9]

    Rather than accept the specter of enforced austerity conjured up by cornucopians and neo-Prometheans across the ideological spectrum whenever they are confronted with the biophysical limits now made palpable by our global ecological catastrophe, we must pursue a radical social and political project under these limits and conditions. Indeed, a decelerationist socialism might be the only way to salvage human civilization and creaturely life while repairing the biosphere of which both are parts: utopia among the ruins. While all the grand radical programs of the modern era, including Godwin’s own early perfectibilism, have been oriented toward the future, this project must contend with the very real burden of the past, as Malm notes: “every conjuncture now combines relics and arrows, loops and postponements that stretch from the deepest past to the most distant future, via a now that is non-contemporaneous with itself” (Malm 2016, 8).

    The warming effects of coal or oil burnt in the past misshape our collective present and future, due to the cumulative effects of CO2 in the atmosphere, even if—for example—all carbon emissions were to stop tomorrow. Global warming in this way represents the weight of those dead generations and a specific tradition—fossil capitalism and its self-sustaining growth— as literal gothic nightmare; one that will shape any viable post-carbon and post-capitalist future.

    Perhaps the post-accelerationist Godwin of the later 1790s and afterward is instructive in this regard. Although chastened by the death of his wife, the collapse of the French Revolution, and the campaign of vilification aimed at him and fellow radicals—in addition to the debate with Malthus outlined here—Godwin nonetheless retained the most important of his emancipatory commitments, as outlined in the 1793 Political Justice, even as he recognized physical constraints, the value of the past, and the primacy of affective bonds in building communal life. In a long piece published in 1809, entitled Essay On Sepulchres, Or, A Proposal For Erecting Some Memorial of the Illustrious Dead in All Ages on the Spot Where Their Remains Have Been Interred, for example, Godwin reveals his new intellectual orientation in arguing for the physical commemoration of the dead; against a purely rationalist or moral remembrance of the deceased’s accomplishments and qualities, and against the younger Godwin’s horror of the body and its imperfections, the older man underlines the importance of our physical remains and the visceral attachments they engender: “It is impossible therefore that I should not follow by sense the last remains of my friend; and finding him no where above the surface of the earth, should not feel an attachment to the spot where his body has been deposited in the earth” (Godwin 1809, 4).

    These ruminations follow a preface in which Godwin reaffirms his commitment to the utopian anarchism of Political Justice, with the caveat that any radical future must both recognize the past and remember the dead. He draws a tacitly anti-Promethean line between our embodied mortality and utopian political aspiration, severing the two often antithetical modes of progress that constitute a dialectic of European enlightenment. While first-generation Romantics, such as Wordsworth and Coleridge, abandoned the futurist Godwinism of their youth, alongside their “Jacobin” political sympathies, for an ambivalent conservatism, the second generation of Romantics, including the extended Godwin-Shelley circle, combined the emancipatory social and political commitments of Political Justice with an appreciation of the natural world and its limits. One need look no further than Frankenstein and Prometheus Unbound—the Shelleys’ revisionist interrogations of the Prometheus myth and modern Prometheanism, which should be read together—to see how this radical romantic constellation represents a bridge between 1790s-era utopianism and later radicalisms, including Marxism and ecosocialism.[10] And if we group the later Godwin with these second-generation Romantics, then Michael Löwy and Robert Sayre’s reading of radical Romanticism as a critical supplement to enlightenment makes perfect sense (see Löwy and Sayre 2001).

    Instead of the science fictional fantasies of total automation and decoupling, largely derived from the pre-Marxist socialist utopianisms that drive today’s various accelerationisms, this Romanticism provides one historical resource for thinking through a decelerationist radicalism that dispenses with the grand progressive narrative: the linear, self-sustaining, and teleological model of improvement, understood in the quantitative terms of more, shared by capitalist and state socialist models of development. Against Prometheanism both old and new, let us reject the false binaries and shared assumptions inaugurated by the Godwin/Malthus debate, and instead join hands with the Walter Benjamin of the “Theses on the Philosophy of History” (1940) in order to better pull the emergency brake on a runaway capitalist modernity rushing headlong toward the precipice.

    _____

    Anthony Galluzzo earned his PhD in English Literature at UCLA. He specializes in radical transatlantic English-language literary cultures of the late eighteenth and nineteenth centuries. He has taught at the United States Military Academy at West Point, Colby College, and NYU.

    Back to the essay

    _____

    Notes

    [1] The “Dark Enlightenment” is a term coined by accelerationist and “neo-reactionary” Nick Land to describe his own orientation as well as that of the authoritarian futurist Curtis Yarvin. The term is often used to describe a range of technofuturist discourses that blend libertarian, authoritarian, and, in the case of “left” accelerationists, post-Marxist elements with a belief in technological transcendence. For a good overview, see Haider (2017).

    [2] This is a simplification of Blumenberg’s point in his Paradigms for a Metaphorology:

    Metaphors can first of all be leftover elements, rudiments on the path from mythos to logos; as such, they indicate the Cartesian provisionality of the historical situation in which philosophy finds itself at any given time, measured against the regulative ideality of the pure logos. Metaphorology would here be a critical reflection charged with unmasking and counteracting the inauthenticity of figurative speech. But metaphors can also—hypothetically, for the time being—be foundational elements of philosophical language, ‘translations’ that resist being converted back into authenticity and logicality. If it could be shown that such translations, which would have to be called ‘absolute metaphors’, exist, then one of the essential tasks of conceptual history (in the thus expanded sense) would be to ascertain and analyze their conceptually irredeemable expressive function. Furthermore, the evidence of absolute metaphors would make the rudimentary metaphors mentioned above appear in a different light, since the Cartesian teleology of logicization in the context of which they were identified as ‘leftover elements’ in the first place would already have foundered on the existence of absolute translations. Here the presumed equivalence of figurative and ‘inauthentic’ speech proves questionable; Vico had already declared metaphorical language to be no less ‘proper’ than the language commonly held to be such, only lapsing into the Cartesian schema in reserving the language of fantasy for an earlier historical epoch. Evidence of absolute metaphors would force us to reconsider the relationship between logos and the imagination. The realm of the imagination could no longer be regarded solely as the substrate for transformations into conceptuality—on the assumption that each element could be processed and converted in turn, so to speak, until the supply of images was used up—but as a catalytic sphere from which the universe of concepts continually renews itself, without thereby converting and exhausting this founding reserve. (Blumenberg 2010, 3-4)

    [3] Davidson situates Godwin, and his ensuing debate with Thomas Malthus on the limits to human population growth and improvement, within a longer eighteenth-century argument regarding perfectibility, the nature of human nature, and the extent to which we are constrained by our biological inheritance. Davidson contends that later models of eugenics and recognizably modern schemes for human enhancement or perfection emerge in the eighteenth, rather than the nineteenth, century, preceding Darwin and Mendel by more than a century. See Davidson (2008), 165.

    [4] Horkheimer and Adorno’s Dialectic of Enlightenment (1947), with its critique of enlightenment as domination and instrumental rationality, is the classic text here.

    [5] Benjamin famously argues for the emancipatory potential of mechanical reproducibility—of the image—in new visual media, such as film, against the unique “aura” of the original artwork. Benjamin sees in artistic aura a secularized version of the sacred object at the center of religious ritual. I am, of course, simplifying a complex argument that Benjamin himself would later qualify, especially as regards modern industrial technology, new media, and revolution. Anders—Benjamin’s cousin and first husband of Hannah Arendt, who introduced Benjamin’s work to the English-speaking world—pushes this line of argument in a radically different direction, as human beings in the developed world increasingly feel “obsolete” on account of their perishable irreplaceability—a variation and inversion of artistic and religious aura, since “singularity” here is bound up with transience and imperfection—as compared to the assembly line proliferation of copies, all of which embody an immaterial model in the service of “industrial-Platonism,” in Anders’s coinage. See Anders (2016), 53. See also Benjamin, “The Work of Art in The Age of Mechanical Reproduction” (1939).

    [6] This shift, which includes a critique of what I am calling the Promethean or proto-futurist dimension of the early Godwin, is best exemplified in Godwin’s St. Leon, his second novel, which recounts the story of an alchemist who sacrifices his family and his sanity for the sake of immortality and supernatural power: a model for his daughter’s Frankenstein (Shelley 1818).

    [7] I use Foucault’s descriptive models here with the caveat that, unlike Foucault, I hold that these techniques—of the sovereign, disciplinary, or biopolitical sort—should be anchored in specific socio-economic modes of organization, as opposed to his diffuse “power.” Nor is this list of techniques exhaustive.

    [8] One recent example of this is the vogue for a reanimated Russian Cosmism among Silicon Valley technologists and the accelerationists of the art and para-academic worlds alike. The original cosmists of the early Soviet period managed to recreate heterodox Christian and Gnostic theologies in secular and ostensibly materialist and/or Marxist-Leninist forms, i.e., God doesn’t exist, but we will become Him; with our liberated forces of production, we will make the universal resurrection of the dead a reality. The latter is now an obsession of various tech entrepreneurs such as Peter Thiel, who have invested money in “parabiosis” start-ups, for instance. One contribution to a recent e-flux collection on (neo)cosmism and resurrection admits that cosmism is “biopolitics because it is concerned with the administration of life, rejuvenation, and even resurrection. Furthermore, it is radicalized biopolitics because its goals are ahead of the current normative expectations and extend even to the deceased” (Steyerl and Vidokle 2018, 33). Frankensteinism is real, apparently. But Frankensteinism in the service of what? For a good overview of the newest futurism and its relationship to social and ecological catastrophe, see Pein (2015).

    [9] Lynas and Scranton arguably exemplify these antithetical poles, although the latter has recently expressed some sympathy for something like revolutionary pessimism, very much in line with the decelerationist perspective that animates this essay. In a 2018 New York Times editorial called “Raising My Child in a Doomed World,” he writes: “there is some narrow hope that revolutionary socio-economic transformation today might save billions of human lives and preserve global civilization as we know it in more or less recognizable form, or at least stave off human extinction” (Scranton 2018). Also see Scranton, Learning to Die in the Anthropocene (2015) and Lynas (2011).

    The eco-modernists, who include Ted Nordhaus, Michael Shellenberger, and Stewart Brand, are affiliated with the Breakthrough Institute, a California-based environmental think tank. They are, according to their mission statement, “progressives who believe in the potential of human development, technology, and evolution to improve human lives and create a beautiful world.” The development of this potential is, in turn, predicated on “new ways of thinking about energy and the environment.” Luckily, these ecomodernists have published their own manifesto, in which we learn that these new ways include embracing “the Anthropocene” as a good thing.

    This “good Anthropocene” provides human beings a unique opportunity to improve human welfare, and to protect the natural world in the bargain, through a further “decoupling” from nature, at least according to the ecomodernist manifesto. The ecomodernists extol the “role that technology plays” in making humans “less reliant upon the many ecosystems that once provided their only sustenance, even as those same ecosystems have been deeply damaged.” The ecomodernists reject natural limits of any sort. They recommend our complete divorce from the natural world, like soul from body, although, as they constantly reiterate, this time it is for nature’s own good. How can human beings completely “decouple” from a natural world that is, in the words of Marx, our “inorganic body,” outside of species-wide self-extinction, which is current policy? The eco-modernists’ policy proposals run the gamut from a completely nuclear energy economy and more intensified industrial agriculture to insufficient or purely theoretical (non-existent) solutions to our environmental catastrophe, such as geoengineering or cold fusion reactors (terraforming Mars, I hope, will appear in the sequel). This rebooted Promethean vision is still ideologically useful, while the absence of any analysis of modernization as a specifically capitalist process is telling. In the words of Chris Smaje (2015),

    Ecomodernists offer no solutions to contemporary problems other than technical innovation and further integration into private markets which are structured systematically by centralized state power in favour of the wealthy, in the vain if undoubtedly often sincere belief that this will somehow help alleviate global poverty. They profess to love humanity, and perhaps they do, but the love seems to curdle towards those who don’t fit with its narratives of economic, technological and urban progress. And, more than humanity, what they seem to love most of all is certain favoured technologies, such as nuclear power.

    [10] Terrence Hoagwood (1988), for example, argues for Shelley’s philosophical significance as bridge between 1790s radicalism and dialectical materialism.

    __

    Works Cited

    • Anders, Günther. 2016. Prometheanism: Technology, Digital Culture, and Human Obsolescence, ed. Christopher John Müller. London: Rowman and Littlefield.
    • Badiou, Alain. 2008. “Is the Word Communism Forever Doomed?” Lacanian Ink lecture (Nov).
    • Benjamin, Walter. (1939) 1969. “The Work of Art in The Age of Mechanical Reproduction.” In Illuminations. New York: Schocken Books. 217-241.
    • Benjamin, Walter. (1940) 1969. “Theses on the Philosophy of History.” In Illuminations. New York: Schocken Books. 253-265.
    • Blumenberg, Hans. 2010. Paradigms for a Metaphorology, trans. Robert Savage. Ithaca: Cornell UP.
    • Brassier, Ray. 2014. “Prometheanism and Its Critics.” In Mackay and Avanessian (2014). 467-488.
    • Davidson, Jenny. 2008. Breeding: A Partial History of the Eighteenth Century. New York: Columbia UP.
    • Engels, Friedrich. (1845) 2009. The Condition of the Working Class in England. Ed. David McLellan. New York: Penguin.
    • Foster, John Bellamy. 1998. “Malthus’ Essay on Population at Age 200.” Monthly Review (Dec 1).
    • Foster, John Bellamy. 2000. Marx’s Ecology: Materialism and Nature. New York: Monthly Review Press.
    • Foucault, Michel. 2003. “Society Must Be Defended”: Lectures at the Collège de France, 1975-1976. Trans. David Macey. New York: Picador.
    • Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction.” Viewpoint (Mar 28).
    • Hauskeller, Michael. 2014. Sex and the Posthuman Condition. Hampshire: Palgrave.
    • Higgs, Kerryn. 2014. Collision Course: Endless Growth on a Finite Planet. Cambridge: The MIT Press.
    • Hoagwood, Terrence Allan. 1988. Skepticism and Ideology: Shelley’s Political Prose and Its Philosophical Context From Bacon to Marx. Iowa City: University of Iowa Press.
    • Horkheimer, Max, and Theodor Adorno. (1947) 2007. Dialectic of Enlightenment. Stanford: Stanford University Press.
    • Jonsson, Fredrik Albritton. 2014. “The Origins of Cornucopianism: A Preliminary Genealogy.” Critical Historical Studies (Spring).
    • Kurzweil, Ray. 2006. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books.
    • Löwy, Michael, and Robert Sayre. 2001. Romanticism Against the Tide of Modernity. Durham: Duke University Press.
    • Lynas, Mark. 2011. The God Species: Saving the Planet in the Age of Humans. Washington DC: National Geographic Press.
    • Mackay, Robin and Armen Avanessian, eds. 2014. #Accelerate: The Accelerationist Reader. Falmouth, UK: Urbanomic.
    • Malm, Andreas. 2016. Fossil Capital. London: Verso.
    • Malthus, Thomas. (1798) 2015. An Essay on the Principle of Population. In An Essay on the Principle of Population and Other Writings, ed. Robert Mayhew. New York: Penguin.
    • Mayhew, Robert. 2014. Malthus: The Life and Legacies of an Untimely Prophet. Cambridge: Harvard University Press.
    • Merchant, Carolyn. 1980. The Death of Nature: Women, Ecology, and the Scientific Revolution. Reprint edition, New York: HarperOne, 1990.
    • Pein, Corey. 2015. “Cyborg Soothsayers of the High-Tech Hogwash Emporia: In Amsterdam with the Singularity.” The Baffler 28 (July).
    • Philp, Mark. 1986. Godwin’s Political Justice. Ithaca: Cornell UP.
    • Preu, James. 1959. The Dean and the Anarchist. Tallahassee: Florida State University.
    • Saito, Kohei. 2017. Karl Marx’s Ecosocialism: Capital, Nature, and the Unfinished Critique of Political Economy. New York: Monthly Review Press.
    • Scranton, Roy. 2018. “Raising My Child in a Doomed World.” The New York Times (Jul 16).
    • Scranton, Roy. 2015. Learning to Die in the Anthropocene. San Francisco: City Lights.
    • Shelley, Mary. (1818) 2012. Frankenstein: The Original 1818 Edition, eds. D. L. Macdonald and Kathleen Scherf. Ontario: Broadview Press.
    • Smaje, Chris. 2015. “Ecomodernism: A Response to My Critics.” Resilience (Sep 10).
    • St. Clair, William. 1989. The Godwins and The Shelleys: A Biography of a Family. Baltimore: Johns Hopkins UP.
    • Steyerl, Hito and Anton Vidokle. 2018. “Cosmic Catwalk and the Production of Time.” In Art Without Death: Conversations on Cosmism. New York: Sternberg Press/e-Flux.
    • Thelwall, John. 2008. Selected Political Writings of John Thelwall, Volume Two. London: Pickering & Chatto.
    • Thompson, E. P. 1978. The Poverty of Theory: or An Orrery of Errors. London: Merlin Press.
    • Williams, Alex, and Nick Srnicek. 2013. “#Accelerate: A Manifesto for Accelerationist Politics.” Also in Mackay and Avanessian (2014). 347-362.
    • Winner, Langdon. 1977. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge, MA: The MIT Press.

    works by William Godwin

    • Godwin, William. (1793) 2013. An Enquiry Concerning Political Justice, ed. Mark Philp. Oxford: Oxford UP.
    • Godwin, William. (1797) 1971. The Enquirer: Reflections on Education, Manners and Literature. New York: Garland Publishers.
    • Godwin, William. 1798. An Enquiry Concerning Political Justice. Third edition. London: G.G and J. Robinson.
    • Godwin, William. 1801. Thoughts occasioned by the Perusal of Dr. Parr’s Spital Sermon, preached at Christ Church, April 15, 1800: being a Reply to the Attacks of Dr. Parr, Mr. Mackintosh, the Author of an Essay on Population, and Others. London: G. G. & J. Robinson.
    • Godwin, William. 1809. Essay On Sepulchres, Or, A Proposal For Erecting Some Memorial of the Illustrious Dead in All Ages on the Spot Where Their Remains Have Been Interred. London: W. Miller.
    • Godwin, William. 1820. Of Population. An Enquiry concerning the Power of Increase in the Numbers of Mankind, being an Answer to Mr. Malthus’s Essay on that Subject. London: Longman, Hurst, Rees, Orme & Brown.

    For a more complete bibliography see the William Godwin entry in the Stanford Encyclopedia of Philosophy.

     

  • tante — Artificial Saviors

    tante — Artificial Saviors

    tante

    Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.

    Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by Rosie the robot maid in the 1962 TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans in Marvin the Paranoid Android from Douglas Adams’s Hitchhiker’s Guide to the Galaxy, as well as the threatening, invisible, disembodied, cruel HAL 9000 of Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped many of our conceptions of society’s future. Unless there is some catastrophic event, the future, it seems, will have strong Artificial Intelligences (AIs). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.

    Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has made growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages from startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: The Church of AI.

    A Promise Fulfilled?

    For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Ask today’s experts and spokespeople and that number remains mostly unchanged.

    In 2017 AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into their conversational assistant, which people carry around, installed on their smart devices. It’s gotten quite good at answering simple questions or triggering simple tasks: asking “OK Google, how far is it from here to Hamburg?” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment or tell me that a parcel I was expecting has arrived.

    Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone knows from calling automated telephone systems. Press #1 if you believe.

    Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.

    Nonetheless, the promise is always the same: given the success that the automation of manufacturing and information processing has had in recent decades, AI is considered not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” in digital services or cyber-physical systems like thermostats or “smart” lights.

    Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it to be better than human lip-readers (Chung, et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network reached 52.3% when pointed at footage from BBC politics shows.

    Another recent example from Google’s research department shows how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:

    The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?….This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)

    Google’s researchers interpret the capabilities of the neural network as expressions of the neural network creating a common super-language, one language to finally express all other languages.

    These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI developments used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled, not by accident, on basic logic, a lot of it looked like the first sessions in a traditional logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
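
    What this bottom-up approach looks like can be sketched in a few lines of code. The following Python fragment is purely illustrative (real systems of the era used dedicated logic languages such as Prolog, and the facts, rules, and forward_chain helper here are invented for this example), but it captures the style: knowledge is an explicit, human-readable structure, and “reasoning” is mechanical deduction over it.

    ```python
    # A sketch (invented for illustration, not any historical system) of
    # the symbolic approach: knowledge is an explicit set of facts and
    # inference rules, and "intelligence" is deduction over that structure.

    facts = {("human", "aristotle")}

    # One rule: everything that is human is mortal ("all humans die").
    rules = [("human", "mortal")]

    def forward_chain(facts, rules):
        """Apply the rules repeatedly until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for predicate, subject in list(derived):
                    if predicate == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # {('human', 'aristotle'), ('mortal', 'aristotle')}
    ```

    Run as-is, the script derives (“mortal”, “aristotle”) and nothing else: the system contains exactly what was explicitly put into it, which is both the appeal of this approach and, as the next paragraph notes, its undoing.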

    But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early when natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like, for example, expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to capture wide adoption in real world use cases.

    And then came neural networks.

    Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of said neurons can be “trained” to react in a certain way.

    Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained into the network—it needs to be trained.

    There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with cats would be necessary. When processing these pictures, the network gets positive feedback if the right neuron (the one signaling the detection of a cat) fires, and it strengthens the connections that lead to this result. Where it has a 50/50 chance of being right on the first try, that chance will quickly improve to the point that it reaches very good results, provided the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
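
    The training loop itself can be sketched in a short Python program. The example below is a toy, not a practical implementation: random four-“pixel” vectors stand in for cat photos, the network is tiny, and all names and numbers are invented for illustration. It shows only the mechanism the paragraph describes: a forward pass, an error signal, and the strengthening of connections that lead to the right answer.

    ```python
    # A toy version of the training loop described above (all names and
    # numbers invented; random 4-"pixel" vectors stand in for photos).
    import numpy as np

    rng = np.random.default_rng(0)

    # Labeled training data: "cat" vectors cluster around 1.0,
    # "not a cat" vectors cluster around 0.0.
    X = np.vstack([rng.normal(1.0, 0.3, (50, 4)),
                   rng.normal(0.0, 0.3, (50, 4))])
    y = np.array([1.0] * 50 + [0.0] * 50)

    W1 = rng.normal(0.0, 0.5, (4, 8))   # input layer -> internal layer
    W2 = rng.normal(0.0, 0.5, (8, 1))   # internal layer -> "cat" output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        # Forward pass: feed every "picture" through the network.
        h = sigmoid(X @ W1)            # internal-layer activations
        out = sigmoid(h @ W2)[:, 0]    # how strongly "cat" fires

        # Feedback: adjust the weighted connections in proportion to
        # how much each contributed to the error on the known labels.
        grad_out = (out - y) * out * (1 - out) / len(X)
        grad_W2 = h.T @ grad_out[:, None]
        grad_W1 = X.T @ (grad_out[:, None] * W2.T * h * (1 - h))
        W2 -= 0.5 * grad_W2
        W1 -= 0.5 * grad_W1

    # Evaluation against the labeled examples.
    out = sigmoid(sigmoid(X @ W1) @ W2)[:, 0]
    print(f"training accuracy: {((out > 0.5) == (y == 1)).mean():.2%}")
    ```

    Nothing in this loop inspects what a “cat” is; the program only nudges weighted connections in response to feedback on labeled examples, which is precisely the opacity the following paragraphs take up.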

    Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams) but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why: neural networks are just sets of neurons and their weighted connections. But what does the weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.

    Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, the absence of rain or snow, and generous road sizes), automated translations can almost instantly give people at least an idea of what the rest of the world is talking about, and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?

    Despite optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keeps churning out like a machine, the trouble with the current mainstream in AI has become quite obvious in recent months.

    In June 2015, Google’s Photos service was involved in a scandal: their AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” And even just staying within the image detection domain, there have been, in fact, numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.

    While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?

    John Searle anticipated this argument in the “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response of Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating an understanding of Chinese? Searle continues the experiment by assuming that the rules used by the machine get translated into readable English for a person to follow. A person locked in a room with these rules, pencil and paper could respond to every Chinese text given to that person as convincingly as the machine could. But few would propose that that person now “understands” Chinese in the sense that a human being who knows Chinese does.
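
    Searle’s setup is easy to caricature in code. In the toy sketch below (the RULEBOOK and its two entries are invented here; a convincing program would need vastly more rules), the program answers the question “do you understand Chinese?” affirmatively, in Chinese, while manipulating nothing but uninterpreted symbols.

    ```python
    # A toy "Chinese room": responses are produced purely by matching
    # symbol shapes against a rulebook; no meaning is represented
    # anywhere in the system.
    RULEBOOK = {
        "你好": "你好！",            # "hello" -> "hello!"
        "你懂中文吗？": "当然懂。",    # "do you understand Chinese?" -> "of course."
    }

    def chinese_room(symbols: str) -> str:
        # Look up the incoming shapes; understanding never enters into it.
        return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again."

    print(chinese_room("你懂中文吗？"))  # prints "当然懂。", convincingly but emptily
    ```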

    Current trends in the reception of AI seem to disagree: if a machine can do something that used to be only possible for human cognition, it surely must be intelligent. This assumption of Intelligence serves as foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans) and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?

    But how can a neural network save us if it can’t even distinguish humans from gorillas?

    Thy Kingdom Come 2.0

    The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.

    Technologists who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI, creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first very visible step in “upgrading” an automated process to something more than meets the eye. The multi-language-translation network is not treated as an interesting statistical phenomenon requiring reflection by experts in the analyzed languages and the cultures using them, with regard to their structural and social similarities and the ways they influence(d) one another. Rather, it is a miraculous device taking steps towards an ideal language that would have made Ludwig Wittgenstein blush.[4]

    But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.

    But while predicting crimes still makes most people uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. Yet “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should get the heart transplant?

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Computers cannot lie; they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?

    The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends. In traditional algorithmic systems, implicit biases were hard-coded into the software; they could be analyzed and patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how to address implicit biases or discriminations when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?

    One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.

    The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve into Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creates or amplifies existing social discrimination or violence, the dogma of AI proponents tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.

    Religion is a complex topic without one simple definition that can be applied to decide whether something is, in fact, a religion. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to define religion completely and exclusively. But there are patterns that many popular religions share.

    Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, there often is some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.

    While this is nowhere close to a comprehensive list of the attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.

    Singulatarianism

    Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.

    Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and according to most singulatarians shall) take control of most aspects of society. While the possibility of the Super-AI taking over by force is always lingering in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.

    Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.

    According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.

    While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.

    It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, their makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally, Kurzweil’s own choice of words is deeply religious, starting with the book’s title.

    With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in the acceptance of the super-AI and in helping along every form of scientific progress, to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). Ethical debates are, in Kurzweil’s perception, fundamentally pointless, with the universe and technology as god necessarily moving past them—regardless of what the result of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.

    Because the intentions and reasoning of the super-AI being are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high-priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.

    Not every proponent of AI goes as far as the singulatarians go. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or decide fully automatically where police should focus their attention (Perry et al. 2013). The second example (usually referred to as “predictive policing”) probably illustrates this problem best: all training data used to build the models that are supposed to help police be more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.
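
    The feedback loop described here is simple enough to simulate. The toy model below is an illustration only (no real predictive-policing product works from two districts and a random number generator; all names and numbers are invented), but it shows how a system trained on skewed records keeps “confirming” them: both districts have identical true incident rates, yet patrols follow the records and records accumulate only where patrols go.

    ```python
    # A toy feedback loop (all names and numbers invented): two districts
    # with identical true incident rates, but historical records that
    # over-count district "A." Patrols follow the records, and records
    # can only grow where patrols are sent.
    import random

    random.seed(1)

    TRUE_RATE = 0.05                  # identical in both districts
    records = {"A": 60, "B": 40}      # the biased historical data

    for year in range(5):
        # "Prediction": the district with more recorded incidents
        # receives the bulk of the patrols.
        ranked = sorted(records, key=records.get, reverse=True)
        patrols = {ranked[0]: 70, ranked[1]: 30}

        for district, n_patrols in patrols.items():
            # Incidents are only recorded where officers are present.
            observed = sum(random.random() < TRUE_RATE
                           for _ in range(n_patrols * 20))
            records[district] += observed

        share_a = records["A"] / sum(records.values())
        print(f"year {year}: district A share of records = {share_a:.2f}")
    ```

    Year after year, district “A”’s share of recorded incidents climbs, although nothing distinguishes the two districts except the data the system was seeded with.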

    While there is value to automating certain dangerous or error-prone processes, such as driving cars in order to protect human life or the environment, extending that strategy to society as a whole is a deeply problematic approach.

    The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs doesn’t only leave behind the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically, without any way for people to understand the reasoning, to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea that renders the struggle of marginalized groups, and any political program that’s not focused on optimizing[5] the status quo, unthinkable.

    Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.

    Daemons in the Digital Ether

    Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at today, seems like a precognition of sorts.

    Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?

    Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, and the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on it standardizes, harmonizes, and structures in ways that ease further software integration.

    The world today is deeply cyber-physical. The separation of the digital and the “real” worlds that sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011) can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan and in certain cases even whether we may cross a border or not. These processes of power traditionally “ran on” social systems, on government organs or organizations, or maybe just on individuals; they are now moving into software agents, removing the risky, biased human factor, as well as checks and balances.

    The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, from that group’s ignorance of this matter, and from the way the group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.

    Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives that define what is debatable. Science as a tool to separate truth from falsehood is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.

    With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.

    When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by this hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.

    On the other hand, the public does have mental guards against “opinion” and “belief” in place that get taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard is circumvented when supposedly objective technologists propose their personal tech-religion as fact.

    Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.

    Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. By calling these systems “intelligent,” a certain level of agency is implied, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.

    But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations developing and deploying them, as scientists such as Cathy O’Neil, in her book Weapons of Math Destruction, illustrate:

    Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)

    For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”

    Amen

    The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, co-founder of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.

    But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.

    The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can serve a society to create fairer, more transparent processes that are, in fact, not less but more accountable.

    But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect on, and constantly evaluate the data used to train these systems; to integrate the perspectives of marginalized people, of people potentially affected negatively, even in the first steps of the process of training these systems; and to stop offloading responsibility for the actions of automated systems onto the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.

    Amen.

    _____

    tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.


    _____

    Notes

    [1] Moore’s Law describes the observation, made popular by Intel co-founder Gordon Moore, that the number of transistors per square inch doubles roughly every two years (or every 18 months, depending on which version of the law is cited).
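
    As a back-of-the-envelope illustration of how the two versions of the law diverge (the numbers are growth factors, not actual transistor counts):

        def transistor_factor(years, doubling_period_years=2.0):
            # Growth factor after `years` when counts double every `doubling_period_years`.
            return 2 ** (years / doubling_period_years)

        print(transistor_factor(10))                             # 2-year version: 32x per decade
        print(transistor_factor(10, doubling_period_years=1.5))  # 18-month version: ~102x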

    [2] https://www.w3.org/RDF/.

    [3] Prolog is a purely logical programming language that expresses problems as the resolution of logical expressions.

    [4] In the Philosophical Investigations (1953), Ludwig Wittgenstein argued against the idea that language corresponds to reality in some simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by individual use, rejecting the idea of an ideal, objective language.

    [5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.
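
    A minimal sketch of the point, with hypothetical policies and metrics:

        # Hypothetical policies and metrics; neither ranking is more "objective".
        policies = {
            "policy_a": {"gdp_growth": 3.0, "equality": 2.0},
            "policy_b": {"gdp_growth": 1.0, "equality": 8.0},
        }

        def optimize(policies, metric):
            # "Optimal" means nothing more than: highest under this metric.
            return max(policies, key=lambda name: metric(policies[name]))

        print(optimize(policies, lambda p: p["gdp_growth"]))  # -> policy_a
        print(optimize(policies, lambda p: p["equality"]))    # -> policy_b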

    [6] We used to call them “statisticians.”

    [7] The creation of intelligence, of life, is a feat traditionally reserved to the gods of old. This is another link to religious world views, as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of current popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.

    _____

    Works Cited

  • Curatorialism as New Left Politics

    Curatorialism as New Left Politics

    by David Berry

    ~
    It is often argued that the left is increasingly unable to speak a convincing narrative in the digital age. Caught between the neoliberal language of contemporary capitalism and its political articulations linked to economic freedom and choice, and a welfare statism that appears counter-intuitively unappealing to modern political voters and supporters, there is often claimed to be a lacuna in the political imaginary of the left. Here, I want to explore a possible new articulation for a left politics that moves beyond the seemingly technophilic and technological determinisms of left accelerationisms and the related contradictions of “fully automated luxury communism”. Broadly speaking, these positions tend to argue for a post-work, post-scarcity economy within a post-capitalist society based on automation, technology and cognitive labour. Accepting that these are simplifications of the arguments of the proponents of these two positions, the aim is to move beyond the assertion that embracing technology itself solves the problem of a political articulation that has to be accepted and embraced by a broader constituency within the population. Technophilic politics is not, of itself, going to be enough to convince an electorate, nor a population, to move towards leftist conceptualisations of possible restructuring or post-capitalist economics. Moreover, it seems to me that the abolition of work is not a desirable political programme for the majority of the population, nor does a seemingly utopian notion of post-scarcity economics make much sense under conditions of neoliberal economics. Thus these programmes are simultaneously too radical and not radical enough. I also want to move beyond the staid and unproductive arguments often articulated in the UK between a left-Blairism and a more statist orientation associated with a return to traditional left concerns personified in Ed Miliband.

    Instead, I want to consider what a politics of the singularity might be, that is, to follow Fredric Jameson’s conceptualisation of the singularity as “a pure present without a past or a future” such that,

    today we no longer speak of monopolies but of transnational corporations, and our robber barons have mutated into the great financiers and bankers, themselves de-individualized by the massive institutions they manage. This is why, as our system becomes ever more abstract, it is appropriate to substitute a more abstract diagnosis, namely the displacement of time by space as a systemic dominant, and the effacement of traditional temporality by those multiple forms of spatiality we call globalization. This is the framework in which we can now review the fortunes of singularity as a cultural and psychological experience (Jameson 2015: 128).

    That is, the removal of temporality as a specific site of politics as such, or the successful ideological deployment of a new framework for understanding oneself within temporality, whether through the activities of the media industries, or through the mediation of digital technologies and computational media. This has the effect of transforming temporal experience into new spatial experiences, whether through translating media, or through the intensification of a now that constantly presses upon us and pushes away both historical time and the possibility for political articulations of new forms of futurity. Thus the politics of singularity point to spatiality as the key site of political deployment within neoliberalism, and by this process undercut the left’s arguments, which draw simultaneously on a shared historical memory of hard-won rights and benefits and on the notion of political action to fight for a better future. Indeed, one might ask if the green critique of the Anthropocene, with its often misanthropic articulations, in some senses draws on some notion of a singularity produced by humanity which has undercut the time of geological or planetary scale change. The only option remaining, then, is to seek radically to circumscribe human activity, if not to outline a radical social imaginary that does not include humans in its conception, and hence to return the planet to the stability of a geological time structure no longer undermined by human activity. Similarly, neoliberal arguments over political imaginaries highlight the intensity and simultaneity of the present mode of capitalist competition and the individualised (often debt-funded) means of engagement with economic life.

    What then might be a politics of the singularity which moved beyond politics that drew on forms of temporality for its legitimation? In other words, how could a politics of spatiality be articulated and deployed which re-enabled the kind of historical project towards a better future for all that was traditionally associated with leftist thought?

    To do this I want to think through the notion of the “curator” that Jameson disparagingly thinks is an outcome of the singularity in terms of artistic practice and experience. He argues that today we are faced with the “emblematic figure of the curator, who now becomes the demiurge of those floating and dissolving constellations of strange objects we still call art.” Further,

    there is a nastier side of the curator yet to be mentioned, which can be easily grasped if we look at installations, and indeed entire exhibits in the newer postmodern museums, as having their distant and more primitive ancestors in the happenings of the 1960s—artistic phenomena equally spatial, equally ephemeral. The difference lies not only in the absence of humans from the installation and, save for the curator, from the newer museums as such. It lies in the very presence of the institution itself: everything is subsumed under it, indeed the curator may be said to be something like its embodiment, its allegorical personification. In postmodernity, we no longer exist in a world of human scale: institutions certainly have in some sense become autonomous, but in another they transcend the dimensions of any individual, whether master or servant; something that can also be grasped by reminding ourselves of the dimension of globalization in which institutions today exist, the museum very much included (Jameson 2015: 110-111).

    However, Jameson himself makes an important link between spatiality as the site of a contestation and the making-possible of new spaces, something that curatorial practice, with its emphasis on the construction, deployment and design of new forms of space, points towards. Indeed, in relation to theoretical constructions, Jameson suggests “perhaps a kind of curatorial practice, selecting named bits from our various theoretical or philosophical sources and putting them all together in a kind of conceptual installation, in which we marvel at the new intellectual space thereby momentarily produced” (Jameson 2015: 110).

    In contrast, the question for me concerns the radical possibilities suggested by this event-like construction of new spaces, and how they can be used to reverse or destabilise the time-axis manipulation of the singularity. The question then becomes: could we tentatively think in terms of a curatorial political practice, which we might call curatorialism? Indeed, could we fill out the ways in which this practice could aim to articulate, assemble and, more importantly, provide a site for a renewal and (re)articulation of left politics? How could this politics be mobilised into the nitty-gritty of actual political practice, policy, and activist politics, and engender the affective relation that inspires passion around a political programme and suggests itself to the kinds of singularities that inhabit contemporary society? To borrow the language of the singularity itself, how could one articulate a new disruptive left politics?

    dostoevsky on curation (image source: Curate Meme)

    At this early stage of thinking, it seems to me that in the first case we might think about how curatorialism points towards the need to move away from concern with internal consistency in the development of a political programme. Curatorialism gathers its strength from the way in which it provides a political pluralism, an assembling of multiple moments into a political constellation that takes into account and articulates its constituent moments. This is the first step in the mapping of the space of a disruptive left politics. This is the development of a spatial politics in as much as, crucially, the programme calls for a weaving together of multiplicity into this constellational form. Secondly, we might think about the way in which this spatial diagram can then be translated into a temporal project, that is, the transformation of a mapping program into a political programme linked to social change. This requires the capture and illumination of the multiple movements of each moment and their re-articulation through a process of reframing the condition of possibility in each constellational movement in terms of a political economy that draws from the historical possibilities that the left has made possible previously, but also from the need for new concepts and ideas to link the politics of necessity to the huge capacity of a left project towards the mitigation and/or replacement of a neoliberal capitalist economic system. Lastly, it seems to me that to be a truly curatorial politics means to link to the singularity itself as a force of strength for left politics, such that the development of a mode of the articulation of individual political needs is made possible through the curatorial mode, and through the development of disruptive left frameworks that link individual need, social justice, institutional support, and a left politics that reconnects the passions of interests to the passion for justice and equality with the singularity’s concern with intensification.[1] This can, perhaps, be thought of as the replacement of a left project of ideological purity with a return to the Gramscian notions of strategy and tactics through the deployment of what he called a passive revolution, mobilised partially in the new forms of civil society created through collectivities of singularities within social media, computational devices and the new infrastructures of digital capitalism, but also through older forms of social institutions, political contestations and education.[2]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation and Design. He is also a Director of the Sussex Humanities Lab.

    _____

    Notes

    [1] This remains a tentative articulation that is inspired by the power of knowledge-based economies both to create the conditions of singularity through the action of time-axis manipulation (media technologies) and, arguably, to provide the countervailing tools, spaces and practices for the contestation of a singularity connected only with a neoliberal political moment. That is, how can these new concepts and ideas, together with the frameworks that are suggested in their mobilisation, provide new means of contestation, sociality and broader connections of commonality and political praxis?

    [2] I leave to a later paper the detailed discussion of the possible subjectivities both in and for themselves within a framework of a curatorial politics. But here I am gesturing towards political parties as the curators of programmes of political goals and ends, able then to use the state as a curatorial enabler of such a political programme. This includes the active development of the individuation of political singularities within such a curatorial framework.

    Bibliography

    Jameson, Fredric. 2015. “The Aesthetics of Singularity.” New Left Review, No. 92 (March-April 2015).


  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: while computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons; each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.


    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact somewhat a different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfen aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting expose at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “Art in the Age of Mechanical Reproducibility,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force over which humans intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility on those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us; indeed this premise, understood in terms of stewardship or commonwealth, would in my opinion go far in correcting and preventing such careless destruction. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem with so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.


  • The Man Who Loved His Laptop

    The Man Who Loved His Laptop

    a review of Spike Jonze (dir.), Her (2013)
    by Mike Bulajewski
    ~
    I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous, and that it’s the behavior of the person you are in a relationship with that tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity matters most. The declaration is also an event. For two people beginning a relationship, it marks a turning point and a new stage in the relationship.

    If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.

    There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchange of house keys; and so on. When people, especially unmarried people, talk about relationships, often the first questions are about these stages and rituals. In France the system is apparently much less codified. One convention not present in the United States is that romantic interest is signaled when a man invites a woman to go for a walk with him.

    The point is two-fold: first, although Americans admire and often think of French culture as holding up a standard for what romance ought to be, Americans act nothing like the French in relationships and in fact know very little about how they work in France. Second and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.

    This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp that automatically send your significant other romantic messages, either predefined or of your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.

    Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?

    Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?
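
    It says, at minimum, that the algorithm need not be sophisticated. A hypothetical sketch, in no way the actual code of Romantimatic or BroApp: canned phrases dispatched on a fixed schedule.

        import random
        import sched
        import time

        PHRASES = ["I love you!", "Thinking of you.", "Miss you already."]

        def send_message(text):
            # Stand-in for a real SMS API call.
            print(f"[{time.strftime('%H:%M')}] -> {text}")

        scheduler = sched.scheduler(time.time, time.sleep)

        def schedule_romance(interval_seconds, count):
            # Queue `count` messages, one every `interval_seconds`.
            for i in range(count):
                scheduler.enter(i * interval_seconds, 1,
                                send_message, (random.choice(PHRASES),))
            scheduler.run()

        schedule_romance(interval_seconds=3, count=3)  # short interval for demo purposes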

    The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough to just translate the prewritten phrases into French. You’d have to research French romantic relationships and discover what are the most common phrases—if there are any—and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, so the app is just not feasible in France.

    Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?

    The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages” and that the key to a happy relationship is for each partner to learn to express love in the correct language. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them rigorously and unambiguously in the same way that computer languages and communications protocols are? Can love be reduced to grammar?
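
    Taken at its most literal, the provocation might look like the following sketch; the enum and message type are hypothetical constructions of mine, not anything from the book.

        from dataclasses import dataclass
        from enum import Enum

        class LoveLanguage(Enum):
            WORDS_OF_AFFIRMATION = "words_of_affirmation"
            QUALITY_TIME = "quality_time"
            RECEIVING_GIFTS = "receiving_gifts"
            ACTS_OF_SERVICE = "acts_of_service"
            PHYSICAL_TOUCH = "physical_touch"

        @dataclass
        class LoveMessage:
            language: LoveLanguage
            payload: str

        def translate(message: LoveMessage, target: LoveLanguage) -> LoveMessage:
            # Re-encode the same affection in the partner's preferred "language" --
            # exactly the reduction of love to grammar the essay questions.
            return LoveMessage(language=target, payload=message.payload)

        note = LoveMessage(LoveLanguage.WORDS_OF_AFFIRMATION, "You matter to me.")
        print(translate(note, LoveLanguage.ACTS_OF_SERVICE))

    That translate() can do nothing but relabel the payload is, of course, the point: the “grammar” captures everything about love except what matters.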

    Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.

    Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role which he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best work published.

    [Video: https://www.youtube.com/watch?v=CxahbnUCZxY]

    But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent towards the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; and on a date with a woman one night, she exposes a little too much vulnerability and drunkenly expresses her fear that he won’t call her. He abruptly and awkwardly ends the date.

    Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”

    So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.

    Unlike his other relationships, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined in a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.

    That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited or uninterested in his new partner’s thoughts, feelings and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script and reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.

    Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.

    Technology writer Nicholas Carr‘s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.

    But real feelings with real sensations can also be shallow. As Maria Bustillos notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:

    I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.

    In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill and the editor of the publishing company who wants to publish his work emails to tell him how moved he and his wife were when they read them. What kind of society would treat such pedestrian writing as unusual, profound or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?

    Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org, where an earlier version of this review first appeared.


  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities’ Office of Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.

    3. Jones, 13.

    4. See Gibson.

    5. Jones, 5.

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.

    7. Jones, 5. See also Montfort.

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)