boundary 2

Tag: theory

  • Nitzan Lebovic — Biopolitical Times: The Plague and the Plea

    This essay is a part of the COVID-19 dossier, edited by the b2o editorial staff. 

    by Nitzan Lebovic

    Related article: Christian Haines — A Lyric Intensity of Thought: On the Potentiality and Limits of Giorgio Agamben’s “Homo Sacer” Project

    “Nous savions alors que notre séparation était destinée à durer et que
    nous devions essayer de nous arranger avec le temps.” (“We knew then that our separation was destined to last, and that we would have to try to come to terms with time.”) (Camus, La Peste)

    Addressing coronavirus disease 2019 is a struggle against time: perhaps the first warning of a future world, or the last our species is going to get before losing to global warming. It is a lesson meant to teach us the importance of time, and how we are running out of it.

    The spread of the virus and the global response have illustrated how growth and reduction, acceleration and slowing down, belong to the post-postmodern world. From the jet-speed global spread of the virus, with its exponential expansion, to the governmental and local top-down response—a coordinated effort to slow it down, defer its full effects, and stop it—both problem and solution seemed to move to the rhythm of industrialization and globalization. The attempts to contain this catastrophe resonate with biopolitical control: individual isolation, social separation, governmental control, police and medical surveillance. In short, we are living in a new age of catastrophes. Unlike catastrophic world wars caused by late industrialization and mass mobilization, now we experience the catastrophe brought by profit-based consumption and the destruction of our environment and our world, an existential threat imperiling the very idea of human time.

    A recent analysis by Tomas Pueyo gave a name to the desperate need for more time: by comparing different instances of the spread of the coronavirus and the effectiveness of the response, Pueyo showed that the single most important factor is the time between what he calls “the Hammer” of forceful suppression of the spread and the creation of an effective vaccine. He calls this interim period “the dance of R” and concludes by asking: “What is the one thing that matters now?” His answer: “Time.”

    Pueyo’s analysis emphasizes time because it looks, first and foremost, at life. Ironically, the philosopher of “bare life” (zoē), Giorgio Agamben, disagrees with such estimates. A panel of experts headed by Agamben recently scrutinized the national emergencies (in Agambenian terms, the “states of exception”) declared by many governments in order to contain the spread of COVID-19. (For a better translation of Agamben’s “clarifications,” see here.) In his remarks on the situation, published on February 26, Agamben chose to declare quite dogmatically that any state of emergency, even with lives at stake, was a violation of individual autonomy and of the fundamental principles of civil society. After comparing COVID-19 to the flu, he argued that Italians were “faced with the frenetic, irrational, and entirely unfounded emergency measures adopted against an alleged epidemic of coronavirus” and that the “disproportionate response” grew out of “the tendency to use a state of exception as a normal paradigm for government” as well as a “general state of fear” encouraged by Western governments for populist and capitalist reasons. Agamben’s remarks were followed on March 17 by “Clarifications” that made explicit his assumption that “our society no longer believes in anything but naked life.”

    These admonitions are not unfounded; populist regimes, from Orbán to Netanyahu and Modi, have already seized on emergency declarations in order to tighten the screws of control and push anti-democratic measures. Yet Agamben’s two statements also bring to light an unfortunate structural element embedded in his theory: a focus on bare life misses the temporality of life. After all, as Schmitt and Agamben have acknowledged, our understanding of bare life assumes the suspension in toto of democratic constitutions (Homo Sacer, 15; emphasis in the original). Agamben’s recent attack on nuanced analyses such as Pueyo’s “dance of R” proves that his resistance to the idea of sovereignty has blotted out all consideration of life and politics, and it incidentally identifies an inherent blind spot within his theory: the absence of temporality, or the lack of interest in living time as such. Without a temporal understanding of the biopolitical apparatus, we cannot estimate the dynamics of management and enforcement. We cannot separate a Merkel from a Modi. More specifically, without a temporal analysis of our reality, we have no way to estimate either the spread of the virus or the response to it. Furthermore, ignoring the temporal dimension causes Agamben to miss a crucial element of contemporary biopolitical critique: the fact that as we run out of time in our search for a better politeia, we tend to lose sight of our duty as a species to bring our temporal existence—as individuals and as a political community—in line with the planet, as Dipesh Chakrabarty has shown (in History & Theory and Critical Inquiry).

    Let me explain this by means of a political and a historical case. The history of plagues is convincingly theorized, in a biopolitical vein, by the political philosopher Adi Ophir, in a book whose first half is expected in English translation next year from Fordham University Press. Ophir believes that disasters have gradually been secularized and biopoliticized. While the first half of the book engages with biblical disasters, the second half traces the modern biopolitical mechanisms accompanying crises such as bubonic plagues. Ophir goes back to Daniel Defoe’s Due Preparations for the Plague, as Well for Soul as Body (1722) and A Journal of the Plague Year (1722), and Jean-Pierre Papon’s De la peste, ou Époques mémorables de ce fléau et les moyens de s’en préserver (The plague, or Memorable times of this pestilence and the means to prevent it, 1799). These texts are well known to historians of science and intellectual historians, who have used them to show a growing pressure to regulate the means of prevention. What is new in Ophir’s analysis is the attention he gives to biopolitical means as a form of secularization. For him, plagues are a typical case of the secularization of divine authority, something quite different from the liberal presentation of the evolution of the state as a necessary, positive development. (This is in line with Walter Benjamin’s thinking about “divine violence.”) From this perspective, Defoe and Papon demonstrate that political authorities must rely on emergency decrees and a swift enforcement of isolation to manage and contain the spread of highly infectious diseases. Yet during the eighteenth century any effort of that kind triggered the flight of elites from infected areas, with the concomitant surrender of position and authority to the middle class, a power reclaimed once the danger passed.
    Ophir, following Michel Foucault’s analysis in Security, Territory, Population and Agamben’s in Homo Sacer and State of Exception, presents the typical management of a national population in troubled times as a coupling of governmental carelessness and abuse of power, usually in the service of the economic interests of the elites and the divine legitimacy of the ruler. As the evolution of such state institutions shows, it is often difficult to separate incompetence from abuse, or procedural authority from divine authority; both grew out of the abandonment and consolidation of power by emergency decrees. How does this help us better understand the politics of the plague? Looking at such governmental mechanisms from a nonliberal, nonprogressive point of view, one cannot help but note the practical importance of intervening to slow the spread of a dangerous virus by implementing “systematic territorialization.” Seclusion, closure, isolation, and surveillance in times of trouble enabled the court—operating from a safe distance—to save lives. From a different angle, the operative question asked by governments—the question that troubled Defoe and Papon in the eighteenth century—related to “proper abandonment.” “From the perspective of the state, it is clear,” writes Ophir, echoing those early plague chroniclers, “abandonment is a form of containment, and the seclusion of infected areas is . . . temporary and partial, an urgent need of the hour and aimed at saving the state as a whole.” Such measures, in simple words, may help save lives, but we must be able to keep emergency measures and divine-like authority from becoming the rule once the elite decides it is time to come back home.

    Back to the present, back to Agamben and the problem of leaving out temporality. If the most important question in the present moment is that of gaining time (vis-à-vis both earthly plagues and the environmental apocalypse), then a structural analysis of emergencies cannot suffice. A dogmatic insistence on bare life misses the need to take emergency situations seriously; at a certain moment, the Hammer needs to fall, for the benefit of the public. Agamben misses, I believe, the real political point of this situation, which is the critique of “proper abandonment” and of the temporary use of biopolitical measures. Simply put, our struggle should not be about an affirmation or a negation of the state of emergency as such, but an attempt to realize when such decrees diverge from the temporality of life, rejecting the temporal democratic principles that follow the logic of the public in toto (demos and ochlos, rather than a separation between the two). This need not be about sovereign territorialization, economic interest, or bare life. Yes, such analysis requires a history and an understanding of procedural processes, but where would we be if not for Foucault’s emphasis on the gradual shaping of the biopolitical apparatus? Without time, we are left with nothing but bare life.

    Nitzan Lebovic is an associate professor of history and the Apter Chair of Holocaust Studies and Ethical Values at Lehigh University. He is the author of The Philosophy of Life and Death: Ludwig Klages and the Rise of a Nazi Biopolitics (2013) and Zionism and Melancholy: The Short Life of Israel Zarchi (2019) and the coeditor of The Politics of Nihilism (2014) and Catastrophe: A History and Theory of an Operative Concept (2014) as well as the editor of special issues of Rethinking History (Nihilism), Zmanim: Tel-Aviv University Journal of History (Religion and Power), The New German Critique (Political Theology), Comparative Literature and Culture (Complicity and Dissent), and Political Theology (Prophetic Politics).

  • Eugene Thacker – Weird, Eerie, and Monstrous: A Review of “The Weird and the Eerie” by Mark Fisher

    by Eugene Thacker

    Review of Mark Fisher, The Weird and the Eerie (Repeater, 2017)

    For a long time, the horror genre was not generally considered worthy of critical, let alone philosophical, reflection; it was the stuff of cheap thrills, pulp magazines, B-movies. Much of this has changed in recent years, as a robust and diverse critical literature has emerged around the horror genre, much of which considers it not only as a reflection of society but as an autonomous platform for posing far-reaching questions concerning the fate of the human species, the species that has named itself. These are sentiments that have preoccupied recent writing on the horror genre, much of which borrows from developments in contemporary philosophy and attempts to expand the confines of horror beyond the usual fixation on gore, violence, and shock tactics. This hasn’t always been the case. Even today, writing on genre horror often tends towards “list” books (of the type The Top 100 Italian Horror Films From 1977, Volume IV), or books that are basically print-on-demand databases (The Encyclopedia of Asian Ghost Stories from the Beginning of Time, and Before That). These are rounded out by a plethora of introductory textbooks and surveys, usually aimed at film studies undergraduates (e.g. Key Terms in Cultural Studies: Splatterpunk), and opaque academic monographs of Lacanian psychoanalytic semiotic readings of horror film that themselves seem to be part of some kind of academic cult.

    While such books can be informative and helpful, reading them can be akin to the slightly woozy feeling one has after having gone down a combined Google/Wikipedia/YouTube rabbit-hole, emerging with bewildered eyes and terabytes of regurgitated data. Recent writing on the horror genre, however, takes a different approach, eschewing both the popular and the academic poles for a perhaps yet-to-be-named third space. One book that takes up this challenge is Mark Fisher’s The Weird and the Eerie, published this year. (Fisher is likely known to readers through his blog K-punk, which had been running for almost two decades before his untimely death.) What Fisher’s study shares with other like-minded books is an interest in expanding our understanding of the horror genre beyond the genre itself, and he does this by focusing on one of the deepest threads in the horror genre: the limits of human beings living in a human-centric world.

    As a case study, consider the opening passage from H.P. Lovecraft’s well-known short story “The Call of Cthulhu”:

    The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

    With this – arguably the most foreboding opener ever written for a story – Lovecraft sets the stage for what is really an extended meditation on human finitude. Originally published in the February 1928 issue of the pulp magazine Weird Tales, “Cthulhu” ostensibly brings together the perspectives of deep time and deep space to reflect on the comparatively myopic and humble non-event that is human civilization – at least that’s how Lovecraft himself puts it. It is well known that Lovecraft took cues from the likes of Edgar Allan Poe, Algernon Blackwood, and Arthur Machen – influences that he himself acknowledged. Equally well known is Lovecraft’s notorious xenophobia (often expressed in his correspondence as outright racism). Yet in spite of – or because of – this, Lovecraft remained unambiguous in his own approach to the horror genre. In his numerous essays, notes, and letters, he observes, with an unflinching misanthropy, how a horror story should evoke “an atmosphere of breathless and unexplainable dread of outer, unknown forces,” forces that point towards a “malign and particular suspension or defeat of those fixed laws of Nature which are our only safeguard against the assaults of chaos and the daemons of unplumbed space.” The “monsters” in such tales were far from the usual line-up of vampires, werewolves, zombies, and demons – all of which, for Lovecraft and his colleagues, end up serving as mere solipsistic reflections of human-centric hopes and fears. They are often described in abstract, elemental, almost primordial ways: “the colour out of space,” “the shadow out of time,” or simply “the lurking fear.”

    The story of “Cthulhu” itself – which details the discovery of a cult devoted to an ancient, malefic Elder Deity vaguely resembling an oozing winged cephalopod emerging from a hidden tomb of impossibly-shaped Cyclopean black geometry, foretelling not only the end of the world but the deeper futility of the entirety of human civilization – has since obtained a cult status among horror authors, critics, and fans alike. In the early 20th century, like-minded tales of cosmic misanthropy were written by Lovecraft contemporaries Clark Ashton Smith, Robert E. Howard, and Robert Bloch, as well as by later authors of the weird tale such as Ramsey Campbell, Caitlín Kiernan, China Miéville, and Junji Ito. Like a slow-moving, tentacular meme, the Cthulhu “mythos” has reached far beyond the confines of literary horror. Film adaptations abound (the term “straight-to-video” no longer applies, but is still apt here). Video games, which nearly always end in despair and/or death. Role-playing games, complete with impossibly-shaped 10-sided black dice. A visit to any Comic Con will yield a dizzying array of comics, ‘zines, artwork, posters, bumper stickers, hoodies, Miskatonic University course catalogs, editions of the dreaded Necronomicon, and even Cthulhu plushies for the Lovecraft toddler. An industry is born. Today, distant cousins of Cthulhu can be seen in the Academy Award-nominated Arrival (2016), and the distinctly un-nominated burlesque that is Independence Day: Resurgence (2016). Cthulhu, it seems, has gone mainstream.

    Amid all the fondness for such abysmal and tentacular monstrosities, it is easy to overlook the themes that run through Lovecraft’s short tale, themes at once disturbing and compelling, and which mark the tradition often referred to as “supernatural horror” or “cosmic horror.” When Lovecraft characters happen upon strange creatures like Cthulhu (or worse, the Shoggoths), they don’t have the typical reactions. “Fear” is too simple a term to describe it; it encompasses everything without saying anything. But neither are they overcome by the more literary affects of “terror” or “horror,” like the characters of an old gothic novel. They have neither the time nor the patience for the critical distance afforded by a psychoanalytic “uncanny,” or the literary structures of the “fantastic.” Confronted with Cthulhu, Lovecraft’s characters simply freeze. They become numb. They go dark. Frozen thought. They can’t wrap their heads around what is right before them. What they “feel” is exactly this “inability of the human mind to correlate all its contents.” Forget the fear of death, I’ve just discovered a primordial, other-dimensional, slime-ridden necropolis of obsidian blasphemy that throws into question all human knowledge on this now-forsaken speck of cosmic dust we laughably call “our” planet.

    Yet, in all their pulpy, melodramatic, low-brow seriousness, the questions raised by Lovecraft and other writers in Weird Tales are also philosophical questions. They are questions that address the limits of human knowledge in a rapidly-changing world, a world that seems indifferent to the machinations of science or doctrinal exuberance of religion, impassive before the hubris of technological advance or the lures of political ideology – a cold “crawling chaos” lurking just beneath the fragile fabric of humanity. What the characters of such stories discover (aside from the usual train of madness, dread, and, well, death) is a kind of stumbling humbleness, the human brain discovering its own limit, enlightened only of its own hubris – the humility of thought.

    *

    This theme  – the limits of what can be known, the limits of what can be felt, the limits of what can be done – is central to Fisher’s The Weird and the Eerie. This is markedly different from other approaches to horror, which, however critical they may seem, often regard the horror genre as having an essentially therapeutic function, enabling us to purge, cope with, or work through our collective fears and anxieties. This therapeutic view of horror often becomes polarized between reactionary readings (a horror story that promotes the establishing or re-establishing of norms) or progressive readings (a horror story that promotes otherness, difference, and transgression of norms). And yet, in the final analysis, it is also hard to escape the sense that there is a certain kind of solipsism to the horror genre, that it is we human beings that remain at the center of it all, who have either constructed boundaries and bunkers and have once again staved off another threat to our collective identity, or who have devised clever ways of creating hybrids, fusions, and monstrous couplings with the other, thereby extending humanity’s long dreamed-of share of immortality.

    Whether reactionary or progressive, both responses to the horror genre involve a strategy in which the world in all its strangeness is transformed into a world made in our own image (anthropomorphism), or a world rendered useful for us as human beings (anthropocentrism). In spite of all the horrifying things that happen to the characters in horror stories, there is a sense in which the horror genre is ultimately a kind of humanism, a panegyric to the limitless potential of human knowledge, the immeasurable capacity for human feeling, the infinite promise of human sovereignty. This is, of course, not surprising, given the somber didactics of even the most extreme zombie apocalypses, vampiric mutations, or demonic plagues. Species self-interest is at stake. Humanity may be brought to the brink of extinction, only so that that same humanity may extend its mastery (self-mastery and mastery over its environment), and even obtain some form of ascendency over its own tenuous, existential status. Subtending the survivalist imperative of the horror genre and its pragmatic arsenal of mastering monsters of all kinds is another kind of mastery – a metaphysical mastery.

    But this is only one way of understanding the horror genre. The insight of books like Fisher’s is that the horror genre is also capable of chipping away at this species-specific sovereignty, taking aim at the twin pillars of anthropomorphism and anthropocentrism. Instead of being concerned with species self-interest and mastery, such horror stories tend more towards humility, hubris, and even, in their darkest moments, futility. It is a project that is doomed to failure, of course, and perhaps this is why so many of the characters in the tales of Lovecraft, Algernon Blackwood, or Izumi Kyoka find themselves in worlds that are both untenable and unlivable. They end up with nothing but a bit of useless quasi-wisdom, scribbling away madly in a darkened room trying to make sense of it all not making any sense. Or they detach themselves from the humdrum human world of plans and projects, finding themselves inexorably pulled headlong into the ambivalent abyss of self-abnegation. Or worse – they simply continue to exist. What results is what we might call a “bleak humanism” – a horror story interested in humanity only to the extent that humanity is defined by its uncertainties, its finitude, its doubts – the humility of being human.

    Fisher’s terms are relatively clear. “What the weird and the eerie have in common is a preoccupation with the strange.” For Fisher, the strange is, quite simply, “a fascination for the outside […] that which lies beyond standard perception, cognition and experience.” But the weird and the eerie are quite different in how they apprehend the strange. As Fisher writes, “the weird is constituted by a presence – the presence of that which does not belong.” There is something exorbitant, out-of-place, and incongruous about the weird. It is the part that does not fit into the whole, or the part that disturbs the whole – threshold worlds populated by portals, gateways, time loops, and simulacra. Fundamental presumptions about self, other, knowledge, and reality will have to be rethought. “The eerie, by contrast, is constituted by a failure of absence or by a failure of presence. There is something where there should be nothing, or there is nothing where there should be something.” Here we encounter disembodied voices, lapses in memory, selves that are others, revelations of the alien within, and nefarious motives buried in the unconscious, inorganic world in which we are embedded.

    The weird and the eerie are not exclusive to the more esoteric regions of cosmic horror; they are also embedded in and bound up with quotidian notions of selfhood and the everyday relationship between self and world. The weird and eerie crop up in those furtive moments when we suspect we are not who we think we are, when we wonder if we do not act so much as we are acted upon. When everything we assumed to be a cause is really an effect. The weird and eerie are, ultimately, inseparable from the fabric of the social, cultural, and political landscape in which we are embedded. Fisher: “Capital is at every level an eerie entity: conjured out of nothing, capital nevertheless exerts more influence than any allegedly substantial entity.” There is a sense in which, for Fisher, the weird and the eerie constitute the poles of our ubiquitous “capitalist realism,” prompting us to re-examine not only presumptions concerning human agency, intentionality, and control, but also inviting a darker, more disturbing reflection on the strange agency of the inanimate and impersonal materiality of the world around us and within us.

    Fisher’s interest in Lovecraft stems from this shift in perspective from the human-centric to the nonhuman-oriented – not simply a psychology of “fear,” but the unnerving, impersonal calm of the weird and eerie. As scholars of the horror genre frequently note, Lovecraft’s tales are distinct from genre fantasy, in that they rarely posit an other world beyond, beneath, or parallel to this one. And yet, anomalous and strange events do take place within this world. Furthermore, they seem to take place according to some logic that remains utterly alien to the human world of moral codes, natural law, and cosmic order. If such anomalies could simply be dismissed as anomalies, as errors or aberrations in nature, then the natural order of the world would remain intact. But they cannot be so easily dismissed, and neither can they simply be incorporated into the existing order without undermining it entirely. Fisher nicely summarizes the dilemma: “a weird entity or object is so strange that it makes us feel that it should not exist, or at least that it should not exist here. Yet if the entity or object is here, then the categories which we have up until now used to make sense of the world cannot be valid. The weird thing is not wrong, after all: it is our conceptions that must be inadequate.”

    *

    This dilemma (which literary critic Tzvetan Todorov called “the fantastic”) is presented in unique ways by authors of the weird tale and cosmic horror. Such authors refuse to identify the weird with the supernatural, and often refuse the distinction between the natural and supernatural entirely. They do so not via mythology or religion, but via science – or at least a peculiar take on science. In cosmic horror, the strange reality described by science is often far more unreal than any vampire, werewolf, or zombie. Fisher highlights this: “In many ways, a natural phenomenon such as a black hole is more weird than a vampire.” Why? Because the existence of the vampire, anomalous and transgressive as it may seem, actually reinforces the boundary between the natural order “in here” and a transcendent, supernatural order “out there.” “Compare this to a black hole,” Fisher continues, “the bizarre ways in which it bends space and time are completely outside our common experience, and yet a black hole belongs to the natural-material cosmos – a cosmos which must therefore be much stranger than our ordinary experience can comprehend.” Science, for all its explanatory power, inadvertently reveals the hubris of the explanatory impulse of all human knowledge, not just science.

    Authors such as Lovecraft were well aware of this shift in their approach to the horror genre. An oft-cited passage from one of Lovecraft’s letters reads: “…all my tales are based on the fundamental premise that common human laws and interests and emotions have no validity or significance in the vast cosmos-at-large.” To write the truly weird tale, Lovecraft notes, “one must forget that such things as organic life, good and evil, love and hate, and all such local attributes of a negligible and temporary race called mankind, have any existence at all.” So much for humanism, then. But Fisher is also right to note that Lovecraft’s tales are not simply horror tales. As Lovecraft himself repeatedly noted, the affects of fear, terror, and horror are merely consequences of human beings confronting an impersonal and indifferent non-human world – what Lovecraft once called “indifferentism” (which, as he jibes, wonders “whether the cosmos gives a damn one way or the other”). There is an allure to the unhuman that is, at the same time, opaque and obscure. As Fisher writes, “it is not horror but fascination – albeit a fascination usually mixed with a certain trepidation – that is integral to Lovecraft’s rendition of the weird…the weird cannot only repel, it must also compel our attention.”

    This reaches a pitch in Fisher’s writing on author Nigel Kneale and his series of Quatermass films and TV shows. Quatermass and the Pit, for instance, opens with the shocking discovery of an alien spaceship buried within the bowels of a London tube station (which station I will not say). The strange, quasi-insect remains inside the ship point to another, very different form of life than that of terrestrial life. But the science shows that the alien spaceship is actually a relic from the distant past. It seems that not only geology and cosmology but human history will have to be rethought. Gradually, the scientists learn that the alien relics are millions of years old, and in fact a distant, early progenitor of human beings. We, it turns out, are they – or vice-versa. The Quatermass series not only demonstrates the efficacy of scientific inquiry, it puts forth a further proposition: that science works too well. “Kneale shows that an enquiry into the nature of what the world is like is also inevitably an unraveling of what human beings had taken themselves to be…if human beings fully belong to the so-called natural world, then on what grounds can a special case be made for them?” Reality turns out to be weirder and more eerie than any fantastical world or alien civilization. This is what Fisher calls “Radical Enlightenment,” a kind of physics that goes all the way, a materialism to the nth degree, even at the cost of disassembling the self-aware and self-privileging human brain that conceives of it. Reversals and inversions abound. What if humanity itself is not the cause of world history but the effect of material and physical laws that we can only dimly intuit?

    This theme of Radical Enlightenment runs through Fisher’s book. While he does discuss works of fiction or film one would expect in relation to the horror genre (Lovecraft, Kubrick’s The Shining, David Lynch’s recent films), Fisher also offers ruminations on contemporary works (such as Jonathan Glazer’s 2013 film Under the Skin), as well as a number of evocative comparisons, such as a chapter on the weird effects of time loops in Rainer Werner Fassbinder’s film World on a Wire and Philip K. Dick’s novel Time Out Of Joint. There are also several surprises, including a meditation on the strange “vanishing landscapes” in M.R. James’s ghost stories and Brian Eno’s 1982 ambient album On Land. Also welcome is Fisher’s attentiveness to under-appreciated works in the horror genre, including the disquieting short fiction of Daphne du Maurier. In the span of a few carefully-written pages, Fisher follows the twists and turns of his twin concepts one chapter at a time, one example at a time, until it is revealed exactly how enmeshed the weird and the eerie are in culture generally.

    *

    The Weird and the Eerie is an evocative and carefully written short study in cultural aesthetics. Far from the familiar line-up of vampires, zombies, and demons, Fisher’s eclectic examples speak directly to the central themes of the horror genre: the limits of human knowledge, the metamorphic shapes of fear, and the blurriness of boundaries of all types. His simple conceptual distinction quickly gives way to reversals, permutations, and complications, ultimately refusing any notion of a monstrous or alien unhumanness “out there”; with Fisher, the unhuman is more likely to reside within the human itself (or, as Lovecraft might write it, “the unhuman is discovered to reside within the human itself”).

    Many books on the horror genre are concerned with providing answers, using varieties of taxonomy and psychology to provide a therapeutic application to “our” lives, helping us to cathartically purge collective anxieties and fears. For Fisher, the emphasis is more on questions, questions that target the vanity and presumptuousness of human culture, questions regarding human consciousness elevating itself above all else, questions concerning the presumed sovereignty of the species at whatever cost – perhaps questions it’s better not to pose, at the risk of undermining the entire endeavor to begin with.

    I should let the reader decide which approach makes more sense, given the weird and/or eerie “Waldo-moment” in which we currently find ourselves. But the weird and the eerie are scalable, pervading broad cultural structures as well as the minutiae of personal ruminations. I’ve known Fisher as a colleague for some time. About a week after I had agreed to do this review, I heard via email of Fisher’s suicide. Someone I knew was previously there, over there, doing what they do, the way we so often presume a person’s presence in between moments of punctuated interaction. And then, suddenly, they’re not there. About a week after this, The Weird and the Eerie arrived in the mail. It was hard not to pick up the book and feel it had a kind of aura around it, as if it were some kind of final statement, a last communiqué. I had it on the table in a short stack with other books, and I kept half-expecting it to also vanish, as if its very presence there were incongruous. I would occasionally pick up the book and flip through it, as if secretly hoping to discover pages that weren’t there before. But my copy was the same as all the others. Besides, isn’t that essentially what a book is, a last word written by someone either long dead or who will die in the future? Maybe all books are eerie in this way.

    Eugene Thacker is the author of several books, including In The Dust Of This Planet (Zero Books, 2011) and Cosmic Pessimism (Univocal, 2015).

  • Ben Murphy – The Universes of Speculative Realism: A Review of Steven Shaviro’s The Universe of Things: On Speculative Realism

    Ben Murphy – The Universes of Speculative Realism: A Review of Steven Shaviro’s The Universe of Things: On Speculative Realism

    Steven Shaviro’s The Universe of Things: On Speculative Realism (2014)

    Reviewed by Ben Murphy

    Steven Shaviro begins The Universe of Things (2014) promising a “new look” at Alfred North Whitehead “in light of” speculative realism. The terms of this preface ought to be reversed, though, since what follows Shaviro’s introduction is actually a “new look” at speculative realism “in light of” some Whiteheadean ideas. This distinction is important: readers should not seek out The Universe of Things for an introduction to Whitehead qua Whitehead or even a “new look” at Whitehead vis-à-vis current issues of cultural and critical analysis. (Indeed, better options along these lines include, respectively, Shaviro’s own earlier book, Without Criteria (2009), and the more recent University of Minnesota Press collection The Lure of Whitehead (2014).) Universe, on the other hand, is better described as an attempt to map the cumulative geography of speculative realism, a philosophical movement which Shaviro stresses should be referred to in the plural: speculative realisms. Speculative realisms (and sibling endeavors like object oriented ontology and new materialism) are perpetually in search of heterodox traditions and forgotten figures—philosophical antecedents sought for foundational credence and inspiration. And in this sense Shaviro’s incorporation of Whitehead is the latest in a lengthening line: Graham Harman recuperates a certain version of Heidegger, Jane Bennett returns to Spinoza and Bergson (among others), and, farther afield still, Ian Hamilton Grant champions Schelling’s Naturphilosophie. But if these and other thinkers raid the archive to consolidate new and distinct philosophical templates, Shaviro’s survey is decidedly more evaluative than constructive. Working Whitehead into the cracks of speculative realism, Shaviro widens that movement’s internal fractures in order to expose, and at most nuance—rather than overturn, reverse, or revamp—its prevailing assumptions.

    Shaviro’s critical take on speculative realism relies on two recurring moves: first, an overarching unification and, second, a subsidiary distinction. First, in the name of unity, Shaviro stresses that speculative realisms hold in common a core desire to step outside what he—following French philosopher Quentin Meillassoux—calls the correlationist circle. As reiterated by Shaviro, the primary target implied by this phrase is Kant’s position that the world is only knowable and approachable through thought. “We” can never grasp an object “in itself” or “for itself” in isolation from its relation to us, the thinking subjects. This insistence means that any account of the world and reality is fundamentally an account of the world and reality as accessed through and by human thought. Speculative realisms are unified in wanting to get beyond this self-reflexive loop. Quentin Meillassoux, Graham Harman, Ray Brassier, and Ian Hamilton Grant (the school’s four founding fathers)—as well as fellow travelers—shed the correlationist straitjacket by theorizing (or, better, speculating) about the real world, the world of the “great outdoors” (another Meillassoux coinage) or, as Eugene Thacker puts it in his “horror of philosophy” series, the world “without us.” (For a very different account, which disputes whether “correlationism” refers to a fair or even a meaningful reading of Kant, see David Golumbia’s “‘Correlationism’: The Dogma that Never Was,” recently published in boundary 2.) As Shaviro notes, there’s a timeliness to this “anti-correlationist” critique, since casting the philosophical net beyond the circumscribing human mind seems a deadly serious endeavor in the face of impending ecological catastrophe. Still, the warming planet is just the most obvious and palatable hook that initiates what Shaviro calls the “changed climate of thought” (4) recently amenable to speculative realism. And if both new materialism and object oriented ontology are more prone to non- or para-academic environmental and ecological interventions, then speculative realism is more interested in revisiting and recasting the history of philosophy.

    A commitment to outfoxing correlationism unites speculative realism, but Shaviro’s second move—that of division—hinges on pinpointing the particular strategies employed to achieve this revisionary project. Repeatedly in Universe, Shaviro splits speculative realism into two main factions. On the one hand, Meillassoux and Brassier pursue lines of thought that Shaviro calls “eliminativist”: for these admittedly nihilistic thinkers, correlationism is undone by the revelation that thought is “epiphenomenal, illusory, and entirely without efficacy” (73)—that thought doesn’t rightly and necessarily belong anywhere in the universe. For Shaviro, Brassier goes further in approaching the “extinction of thought” than Meillassoux, who saves thought from complete elimination by introducing a deus ex machina according to which thought and life emerge “ex nihilo” and simultaneously from a universe previously devoid of both (76). The contrast to this first faction is found in Harman, Grant, Levi Bryant, and Timothy Morton. Instead of proposing that thought is fundamentally inimical to the universe, this coalition of speculative realists wagers that agency and thought are everywhere. Positing the “sheer ubiquity of thought in the cosmos” (82), this position reaches its apotheosis for Shaviro in a panpsychic vision where all things—animate and otherwise—are sentient (if perhaps not exactly conscious). Shaviro places himself in this second faction only after making a further distinction that separates him from Harman in particular. Whereas Harman, according to Shaviro, stresses the withdrawn nature of objects—withdrawn in the sense that the object must always “recede” from its relations (30)—Shaviro joins Whitehead (and Latour) in making a distinction between epistemological withdrawnness and ontological relations (see 105).
Where an object may always hold something in reserve from what is knowable to the perceiving mind (as Harman insists), even this reserved measure of the object may be affected and changed by modes of contact that elude knowledge and understanding. Because of “vicarious causation” and “immanent, noncognitive contact” (138, 148) (a mode of contact that Shaviro never satisfactorily distinguishes from more popular usages of the term “affect”), an “occult process of influence” occurs that is “outside” any correlation between “subject and object, or knower and known” (148). The object, then, is not so utterly withdrawn as Harman’s narrowly epistemological account suggests. So between eliminativism and panpsychism as extremes of the speculative realism spectrum, Shaviro says, we’re faced with a “basic choice” (83).

    Describing correlationism and the various offerings to get beyond it is standard fare for speculative realism. But what Universe lacks in originality it compensates for with breadth of analysis and consistently careful, patient exposition. Shaviro admirably treats a wide swath of speculative realists (plus quite a few philosophical giants from both continental and analytical traditions), and he does so with a tone perpetually modulated for utter clarity. Absent is any of the obfuscating rhetoric or over-the-top claims that one might expect from someone who sets out to correct Kant. In part Shaviro’s achievement stems from his own outsider status. His rich body of academic work—on everything from film studies to music video aesthetics to sci-fi-infused accelerationism—as well as the light touch on display here and throughout his superb and eclectic online presence (see: http://www.shaviro.com/) stand him in good stead as a welcome interlocutor and guide. Approaching speculative realism as a kindred but not coincident thinker, he’s able to recapitulate his own coming-to-terms with ideas in a way that translates well to other sympathetic non-initiates.

    Apart from style and tone, though, Shaviro’s approach is also commendable for a self-avowed pragmatism of ideas. In an aside in the first chapter, Shaviro applauds Isabelle Stengers for the insight that “the construction of metaphysical concepts always addresses certain particular, situated needs” (33). “The concepts that a philosopher produces,” Shaviro continues, “depend on the problems to which he or she is responding. Every thinker is motivated by the difficulties that cry out to him or to her, demanding a response” (33). While a fair representation of Shaviro’s own admirably simple and workmanlike prose, these statements also epitomize the generous spirit that urges Universe. Shaviro is careful to explain the fruits and situational benefits of every idea that he treats, perhaps especially those ideas that he wants to challenge—an attractive way of grounding philosophical ideas which, being speculative by definition, sometimes feel quite flighty.

    The discussion of panpsychism that spans chapters four and five is the most exciting and original element of Universe. In part this is because it draws on a body of work in cognitive science and the philosophy of biology that Shaviro knows well and that is fresh fodder for discussions of speculative realism. His discussion in this section also has the added charm of giving itself over to the speculative freedoms afforded to speculative realism itself. As Shaviro recognizes, speculative realism is at its best when it joins with speculative fiction in the common task of “extrapolation” (10). Thus in considering panpsychism we’re teased with the notion that slime molds have thoughts (88). Less bogged down by the minutiae of distinctions between this SR thinker and that, Shaviro joins a more diverse group of thinkers to consider, for instance, Thomas Nagel’s question about what it’s like to be a bat. Well aware of the absurdities attendant to a truly panpsychic vision, Shaviro lets speculation carry the day, and it’s a pleasure to follow him through a romp that ties the questions of speculative realism to a longer intellectual tradition of sometimes strange twists and turns.

    Also helpful and fresh for speculative realism—although somewhat hard to square with the rest of this book—is Shaviro’s first chapter, which shows how Emmanuel Levinas helps us appreciate speculative realism even as Whitehead’s “aesthetic” mode of “contrast” departs from Levinas’ “ethical” encounter with the Other. Where for Levinas the encounter trumps self-concern, for Whitehead both self-concern (or “self-enjoyment”) and “concern” for the Other are poles best understood in balancing counterpoint (rather than conflict). Apart from being the most detailed analysis of Whitehead’s thought—and, indeed, his thought as it changed in his long arc of writing—this opening account is valuable for SR in arguing that a commitment to circumventing correlationism need not be an ethical project in the traditional sense. In other words, in Shaviro’s reading of Whitehead, a philosophy geared towards the object world “without us” isn’t premised on care. The problem here and elsewhere in Universe, though, is the fuzzy usage of the term “aesthetic.” As I’ve suggested, chapter one deploys this term opposite Levinasian ethics in a frustratingly negative mode of definition: aesthetics is said to be what is not ethics. While gaining some clarification in the volume’s titular chapter (see 52-54), the aesthetic remains unclear even when given new treatment in a discussion of Kant that occupies the last ten pages of the book. Here “aesthetic” is set against knowledge (or epistemology) rather than ethics, and, as my discussion of Shaviro’s disagreement with Harman suggests, “aesthetic” comes to mean something like noncognitive contact, or “affect.” If these disparate senses of the “aesthetic” are related or even mutually inclusive, Shaviro doesn’t do enough to show how.

    For all its merits, Universe suffers heavily from being stuck between monograph and essay collection. One searches in vain for any promise that the book’s chapters can be read collectively or in isolation, approached in order or at random. Such a promise, at least, would admit that the chapters don’t serially build to anything in particular. Lacking this or any other clue from Shaviro, though, we’re faced with seven relatively short offerings that loop back on one another with frustratingly little meta-commentary. Much of the mapping of speculative realism as I’ve described it above via unification and division, for instance, appears essentially verbatim in chapters two, six, and seven. The treatment of Harman—both agreement and disagreement—in particular makes continual reappearance. The same could be said of the discussion of panpsychism, which is interesting the first and perhaps even second time but quickly turns suspect as it is recycled through chapters three, four, and five with only the trimmings changed. The mere fact that bits of argument can appear at the beginning and end of the book in essentially the same form (and with Shaviro seemingly unaware of such repetitions) leaves the reader wondering about the value of a journey that feels constrained to a treadmill. A more cynical reader might look to, and find an answer in, the book’s editorial metadata, which reveals that the first three chapters were previously published. Insofar as Universe excels at any one thing, then, it may be at academic entrepreneurialism—a feat of (re)publishing in which a triplet of core essays is surrounded with the sort of rhetorical packing peanuts that actually detract from ideas that would be more forceful as standalone articles.
The reader already deep inside the sweep of SR may find plenty in this extended-cut edition, but those more casually interested will be better served by reading independently (as interests dictate) “Self-Enjoyment and Concern” (on Whitehead, Levinas, and SR), “The Actual Volcano” (Shaviro’s primary disagreement with Harman), and “The Universe of Things” (a broad-strokes and bouncy introduction to the promises and riddles of SR, new materialism, and object ontology). Each has gems of insight owed to Shaviro’s exhaustive research, and reading them apart from one another—perhaps even in their original contexts—would lessen the rather tiresome burden of trying to figure out how they all fit together.

    Ben Murphy is a Ph.D. student at the University of North Carolina at Chapel Hill. He works on 19th and 20th century American literature, the history and philosophy of science, and critical theory. His essay on James Dickey’s Deliverance and film adaptation is forthcoming from Mississippi Quarterly (2017), and you can also find his writing at ETHOS: A Digital Review of Arts, Humanities, and Public Ethics and The Carolina Quarterly. Website: http://englishcomplit.unc.edu/people/ben-murphy

  • Zachary Loeb – Mars is Still Very Far Away

    Zachary Loeb – Mars is Still Very Far Away

    a review of McKenzie Wark, Molecular Red (Verso, 2015)

    by Zachary Loeb

    ~

    There are some games where a single player wins, games where a group of players wins, and then there are games where all of the players share equally in defeat. Yet regardless of the way winners and losers are apportioned, there is something disconcerting about a game whose rules change significantly when one is within sight of victory. Suddenly the strategy that had previously assured success now promises defeat, and the confused players are forced to reconsider all of the seemingly right decisions that have brought them to an impending loss. It may be a trifle silly to talk of winners and losers in the Anthropocene, with its bleak herald, climate change, but the epoch in which humans have become a geological force is one in which the strategies that propelled certain societies towards victory no longer seem like such wise tactics. With victory seeming less and less certain, it is easy to assume defeat is inevitable.


    “Let’s not despair” is the retort McKenzie Wark offers on the first page of Molecular Red: Theory for the Anthropocene. The book approaches the Anthropocene as both a challenge and an opportunity, not for seeing who can pen the grimmest apocalyptic dirge but for developing new forms of critical theory. Prevailing responses to the Anthropocene – ranging from faith in new technology, to confidence in the market, to hopes for accountability, to despairing of technology – all strike Wark as insufficient; what he deems necessary are theories (which will hopefully lead to solutions) that recognize the ways in which the aforementioned solutions are entangled with each other. For Wark the coming crumbling of the American system was foreshadowed by the collapse of the Soviet system – and thus Molecular Red looks back at Soviet history to consider what other routes could have been taken there, before Wark switches his focus back to the United States to search for today’s alternate routes. Molecular Red reads aspects of Soviet history through the lens of “what if?” in order to consider contemporary questions from the perspective of “what now?” As he writes: “[t]here is no other world, but it can’t be this one” (xxi).

    Molecular Red is an engaging and interesting read that introduces its readers to a raft of under-read thinkers – and its counsel against despair is worth heeding. And yet, by the book’s end, it is easy to come away with a sense that while it is true that “there is no other world,” it will, alas, almost certainly be exactly this one.

    Before Wark introduces individual writers and theorists he first unveils the main character of his book: “the Carbon Liberation Front” (xiv). In Wark’s estimation the Carbon Liberation Front (CLF from this point forward) represents the truly victorious liberation movement of the past centuries. And what this liberation movement has accomplished is the freeing of – as the name suggests – carbon, an element which has been burnt up by humans in pursuit of energy, with the result being an atmosphere filled with heat-trapping carbon dioxide. “The Anthropocene runs on carbon” (xv), and seeing as the scientists who coined the term “Anthropocene” used it to mark the period wherein glacial ice cores began to show a growing concentration of greenhouse gases, such as CO2 and CH4, the CLF appears as a force one cannot ignore.

    Turning to Soviet history, Wark works to rescue Lenin’s rival Alexander Bogdanov from being relegated to a place as a mere footnote. Yet Wark’s purpose is not simply to emphasize that Lenin and Bogdanov had different ideas regarding what the Bolsheviks should have done; what is significant in Bogdanov is not questions of tactics but matters of theory. In particular Wark highlights Bogdanov’s ideas of “proletkult” and “tektology” while also drawing upon Bogdanov’s view of nature – he conceived of this “elusive category” as “simply that which labor encounters” (4, italics in original text). Bogdanov’s tektology was to be “a new way of organizing knowledge” while proletkult was to be “a new practice of culture” – as Wark explains, “Bogdanov is not really trying to write philosophy so much as to hack it, to repurpose it for something other than the making of more philosophy” (13). Tektology was an attempt to bring together the lived experience of the proletariat along with philosophy and science – to create an active materialism “based on the social production of human existence” (18), and this production sees Nature as the realm within which laboring takes place. Or, as Wark eloquently puts it, tektology “is a way of organizing knowledge for difficult times…and perhaps also for the strange times likely to come in the twenty-first century” (40). Proletkult (which was an actual movement for some time) sought “to change labor, by merging art and work; to change everyday life…and to change affect” (35) – its goal was not to create proletarian culture but to provide a proletarian “point of view.” Deeply knowledgeable about science, himself a sort of science-fiction author (he wrote a quasi-utopian novel set on Mars called Red Star), and hopeful that technological advances would make workers more like engineers and artists, Bogdanov strikes Wark as “not the present writing about the future, but the past writing to the future” (59).
Wark suggests that “perhaps Bogdanov is the point to which to return” (59) – hence his touting of tektology, proletkult, and Bogdanov’s view of nature.

    While Wark makes it clear that Bogdanov’s ideas did have some impact in Soviet Russia, their effect was far less than what it could have been – and thus Bogdanov’s ideas remain an interesting case of “what if?” Yet, in the figure of Andrey Platonov, Wark finds an example of an individual whose writings reached towards proletkult. Wark sees Platonov as “the great writer of our planet of slums” (68). The fiction written by Platonov, his “(anti)novellas” as Wark calls them, are largely the tales of committed and well-meaning communists whose efforts come to naught. For Platonov’s characters failure is a constant companion; they struggle against nature in the name of utopianism and find that they simply must keep struggling. In Platonov’s work one finds a continual questioning of communism’s authoritarian turn from below; his “Marxism is an ascetic one, based on the experience of sub-proletarian everyday life” (104). And while Platonov’s tales are short on happy endings, Wark detects hope amidst the powerlessness, as long as life goes on, for “if one can keep living then everything is still possible” (80). Such is the type of anti-cynicism that makes Platonov’s Marxism worth considering – it finds the glimmer of utopia on the horizon even if it never seems to draw closer.

    From the cold of the Soviet winter, Wark moves to the birthplace of the Californian Ideology – an ideology which Wark suggests has won the day: “it has no outside, and it is accelerating” (118). Yet, as with the case of Soviet communism, Wark is interested in looking for the fissures within the ideology, and instead of opining on Barbrook and Cameron’s term, he moves through Ernst Mach and Paul Feyerabend en route to a consideration of Donna Haraway. Wark emphasizes how Haraway’s Marxism “insists on including nonhuman actors” (136) – her techno-science functions as a way of further breaking down the barrier that had been constructed between humans and nature. Shattering this divider is necessary to consider the ways that life itself has become caught up with capital in the age of patented life forms like OncoMouse. Amidst these entanglements Haraway’s “Cyborg Manifesto” appears to have lost none of its power – Wark sees that “cyborgs are monsters, or rather demonstrations, in the double sense of to show and to warn, of possible worlds” (146). Such a show of possibilities presents alternatives even when “There’s no mother nature, no father science, no way back (or forward) to integrity” (150). Returning to Bogdanov, Wark writes that “Tektology is all about constructing temporary shelter in the world” (150) – and the cyborg identity is simultaneously what constructs such shelter and seeks haven within it. Beyond Haraway, Wark considers the work of Karen Barad and Paul Edwards, in order to further illustrate that “we are at one and the same time a product of techno-science and yet inclined to think ourselves separate from it” (165). Haraway, and the web of thinkers with which Wark connects her, appear as a way to reconnect with “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences” (179).

    After science, Wark transitions to discussing the science fiction of Kim Stanley Robinson – in particular his Mars trilogy. Robinson’s tale of the scientist/technicians colonizing Mars and their attempts to create a better world on the one they are settling is a demonstration of how “the struggle for utopia is both technical and political, and so much else besides” (191). The value of the Mars trilogy, with its tale of revolutions, both successful and unsuccessful, and its portrayal of a transformed Earth, is in the slow unfolding of revolutionary change. In Red Mars (the first book of the trilogy, published in 1992) there is not a glorious revolution that instantly changes everything, but rather “the accumulation of minor, even molecular, elements of a new way of life and their negotiations with each other” (194). At work in the ruminations of the main characters of Red Mars, Wark detects something reminiscent of tektology even as the books themselves seem like a sort of proletkult for the Anthropocene.

    Molecular Red’s tour of oft-overlooked, or unduly neglected, thinkers is an argument for a reengagement with Marxism, but a reengagement that willfully and carefully looks for the paths not taken. The argument is not that Lenin needs to be re-read, but that Bogdanov needs to be read. Wark does not downplay the dangers of the Anthropocene, but he refuses to wallow in dismay or pine for a pastoral past that was a fantasy in the first place. For Wark, we are closely entwined with our technology and the idea that it should all be turned off is a nonstarter. Molecular Red is not a trudge through the swamps of negativity; rather, it’s a call: “Let’s use the time and information and everyday life still available to us to begin the task, quietly but in good cheer, of thinking otherwise, of working and experimenting” (221).

    Wark does not conclude Molecular Red by reminding his readers that they have nothing to lose but their chains. Rather he reminds them that they still have a world to win.  

    Molecular Red begins with an admonishment not to despair, and ends with a similar plea not to lose hope. Granted, in order to find this hope one needs to be willing to consider that the causes for hopelessness may themselves be rooted in looking for hope in the wrong places. Wark argues that, by embracing techno-science, reveling in our cyborg selves, and creating new cultural forms to help us re-imagine our present and future, the left can make itself relevant once more. As a call for the left to embrace technology and look forward, Molecular Red occupies a similar cultural shelf-space as that filled by recent books like Inventing the Future and Austerity Ecology and the Collapse-Porn Addicts. Which is to say that those who think that what is needed is “a frank acknowledgment of the entangling of our cyborg bodies within the technical” (xxi), those who think that the left needs to embrace technology with greater gusto, will find Molecular Red’s argument quite appealing. As for those who disagree – they will likely not find their minds changed by Molecular Red.

    As a writer Wark has a talent for discussing dense theoretical terms in a readable and enjoyable format throughout Molecular Red. Regardless of what one ultimately thinks of Wark’s argument, one of the major strengths of Molecular Red is the way it introduces readers to overlooked theorists. After reading Wark’s chapters on Bogdanov and Platonov the reader certainly understands why Wark finds their work so engrossing and inspiring. Similarly, Wark makes a compelling case for the continued importance of Haraway’s cyborg concept and his treatment of Kim Stanley Robinson’s Mars trilogy is an apt demonstration of incorporating science fiction into works of theory. Amidst all of the grim books out there about the Anthropocene, Molecular Red is refreshing in its optimism. This is “Theory for the Anthropocene,” as the book’s subtitle puts it, but it is positive theory.

    Granted, some of Wark’s linguistic flourishes become less entertaining over time – “the carbon liberation front” is an amusing concept at first, but by the end of Molecular Red the term is as likely to elicit an eye-roll as introspection. A great deal of carbon has certainly been liberated, but has this been the result of a concerted effort (a “liberation front”) or of humans not fully thinking through the consequences of technology? Certainly there are companies that have made fortunes through “liberating” carbon, but who is ultimately responsible for “the carbon liberation front”? One might be willing to treat terms like “liberation front” with less scrutiny were they not being used in a book so invested in re-vitalizing leftist theory. Does not a “liberation front” imply a movement with an ideology? It seems that the liberation of carbon is more of an accident of a capitalist ideology than the driver of that ideology itself. It may seem silly to focus upon the uneasy feeling that accompanies the term “carbon liberation front,” but this is an example of a common problem with Molecular Red – the more one thinks about some of the premises, the less satisfying Wark’s arguments become.

    Given Wark’s commitment to reconfiguring Marxism for the Anthropocene it is unsurprising that he should choose to devote much of his attention to labor. This is especially fitting given the emphasis that Bogdanov and Platonov place on labor. Wark clearly finds much to approve of in Bogdanov’s idea that “all workers would become more like engineers, and also more like artists” (28). These are largely the type of workers one encounters in Robinson’s work and who are, generally, the heroes of Platonov’s tales; they make up a sort of “proto-hacker class” (90). It is an interesting move from the Soviet laborer to the technician/artist/hacker of Robinson – and it is not surprising that the author of A Hacker Manifesto (2004) should view hackers in such a romantic light. Yet Molecular Red is not a love letter to hackers, which makes it all the more interesting that labor in the Anthropocene is not given broader consideration. Bogdanov might have hoped that automation would make workers more like engineers and artists – but is there not still plenty of laboring going on in the Anthropocene? There is a heck of a lot of labor that goes into making the high-tech devices enjoyed by technicians, hackers, and artists – though it may be a type of labor that is more convenient to ignore, as it troubles the idea that workers are all metamorphosing into technician/artist/hackers. Given Platonov’s interest in the workers who seemed abandoned by the utopian promises they had been told, it is a shame that Molecular Red does not pay greater attention to the forgotten workers of the Anthropocene. After all, contemporary miners of minerals for high-tech doodads, device assemblers, e-waste recyclers, and the impoverished citizens of areas already suffering the burdens of climate change have more in common with the forgotten proletarians of Platonov than with the utopian scientists of Robinson’s Red Mars.

    One way to read Molecular Red is as a plea to the left not to give up on techno-science. Though it seems worth wondering to what extent the left has actually done anything like this. Some on the left may be less willing to conclude that the Internet is the solution to every problem (“some” does not imply “the majority”), but agitating for green technologies and alternative energies seems a pretty clear demonstration that, far from giving up on technology, many on the left still approach it with great hope. Wark is arguing for “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences…rather than the Heidegger-inflected critique of Marcuse and others” (179). Yet in looking at contemporary discussions around techno-science and the left, it does not seem that the “Heidegger-inflected critique of Marcuse and others” is particularly dominant. There may be a few theorists here and there still working to advance a rigorous critique of technology, but, as the recent issues on technology from The Nation and Jacobin both show, the left is not currently in thrall to the bogey-man of Marcuse. In a way this is a shame, for Molecular Red could have benefited from engaging with some of the critics of Marxism’s techno-utopian streak. Indeed, is the problem the lack of “open-mindedness toward the sciences” or that being open-minded has failed thus far to do much to stall the Anthropocene? Or is it that, perhaps, the left simply needs to prepare itself for being open-minded about geo-engineering? Wark describes the Anthropocene as a sort of metabolic rift and cautions that “to reject techno-science altogether is to reject the means of knowing about metabolic rift” (180). Yet this seems to be something of a straw-man argument – how many critics are genuinely arguing that people should “reject techno-science”? Perhaps John Zerzan has a much wider readership than I knew.

    Molecular Red cautions its readers against despair, but the text has a significant darkness about it. Wark writes “we are cyborgs, making a cyborg planet with cyborg weather, a crazed, unstable disingression, whose information and energy systems are out of joint” (180) – but the knowledge that “we are cyborgs” does little to help the worker who has lost her job without suddenly becoming an engineer/artist, “a cyborg planet” does nothing to heal the sicknesses of those living near e-waste dumps, and calling it “cyborg weather” does little to help those who are already struggling to cope with the impacts of climate change. We may be cyborgs, but that doesn’t mean the Anthropocene will go easy on us. After all, the scientists in the Mars trilogy may work on transforming that planet into a utopia, but while they are at it things do not exactly go well back on Earth. When Wark writes that “here among the ruins, something living yet remains” (xxii) he is echoing the ideology behind every anarcho-punk record cover that shows a better life being built on the ruins of the present world. But another feature of those album covers, and of the allusion to the ruins, is that whatever “living yet remains” is a testament to all of the dying that has also transpired.

    McKenzie Wark has written an interesting and challenging book in Molecular Red and it is certainly a book with which it is worth engaging. Regardless of whether or not one is ultimately convinced by Wark’s argument, his final point will certainly resonate with those concerned about the present but hopeful for the future.

    After all, we still have a world to win.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.


  • Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp

    Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp

    by Alexander R. Galloway and Andrew Culp
    ~

    Alexander R. Galloway: You have a new book called Dark Deleuze (University of Minnesota Press, 2016). I particularly like the expression “canon of joy” that guides your investigation. Can you explain what canon of joy means and why it makes sense to use it when talking about Deleuze?

    Andrew Culp, Dark Deleuze (University of Minnesota Press, 2016)

    Andrew Culp: My opening is cribbed from a letter Gilles Deleuze wrote to philosopher and literary critic Arnaud Villani in the early 1980s. Deleuze suggests that any worthwhile book must have three things: a polemic against an error, a recovery of something forgotten, and an innovation. Proceeding along those three lines, I first argue against those who worship Deleuze as the patron saint of affirmation, second I rehabilitate the negative that already saturates his work, and third I propose something he himself was not capable of proposing, a “hatred for this world.” So in an odd twist of Marx on history, I begin with those who hold up Deleuze as an eternal optimist, yet not to stand on their shoulders but to topple the church of affirmation.

    The canon portion of “canon of joy” is not unimportant. Perhaps more than any other recent thinker, Deleuze queered philosophy’s line of succession. A large portion of his books were commentaries on outcast thinkers that he brought back from exile. Deleuze was unwilling to discard Nietzsche as a fascist, Bergson as a spiritualist, or Spinoza as a rationalist. Apparently this led to lots of teasing by fellow agrégation students at the Sorbonne in the late ’40s. Further showing his strange journey through the history of philosophy, his only published monograph for nearly a decade was an anti-transcendental reading of Hume at a time in France when phenomenology reigned. Such an itinerant path made it easy to take Deleuze at his word as a self-professed practitioner of “minor philosophy.” Yet look at Deleuze’s outcasts now! His initiation into the pantheon even bought admission for relatively forgotten figures such as sociologist Gabriel Tarde. Deleuze’s popularity thus raises a thorny question for us today: how do we continue the minor Deleuzian line when Deleuze has become a “major thinker”? For me, the first step is to separate Deleuze (and Guattari) from his commentators.

    I see two popular joyous interpretations of Deleuze in the canon: unreconstructed Deleuzians committed to liberating flows, and realists committed to belief in this world. The first position repeats the language of molecular revolution, becoming, schizos, transversality, and the like. Some even use the terms without transforming them! The resulting monotony seals Deleuze and Guattari’s fate as a wooden tongue used by people still living in the ’80s. Such calcification of their concepts is an especially grave injustice because Deleuze quite consciously shifted terminology from book to book to avoid this very outcome. Don’t get me wrong, I am deeply indebted to the early work on Deleuze! I take my insistence on the Marxo-Freudian core of Deleuze and Guattari from one of their earliest Anglophone commentators, Eugene Holland, who I sought out to direct my dissertation. But for me, the Tiqqun line “the revolution was molecular, and so was the counter-revolution” perfectly depicts the problem of advocating molecular politics. Why? Today’s techniques of control are now molecular. The result is that control societies have emptied the molecular thinker’s only bag of tricks (Bifo is a good test case here), which leaves us with a revolution that only goes one direction: backward.

    I am equally dissatisfied by realist Deleuzians who delve deep into the early strata of A Thousand Plateaus and away from the “infinite speed of thought” that motivates What is Philosophy? I’m thinking of the early incorporations of dynamical systems theory, the ’90s astonishment over everything serendipitously looking like a rhizome, the mid-00s emergence of Speculative Realism, and the ongoing “ontological” turn. Anyone who has read Manuel DeLanda will know this exact dilemma of materiality versus thought. He uses examples that slow down Deleuze and Guattari’s concepts to something easily graspable. In his first book, he narrates history as a “robot historian,” and in A Thousand Years of Nonlinear History, he literally traces the last thousand years of economics, biology, and language back to clearly identifiable technological inventions. Such accounts are dangerously compelling due to their lucidity, but they come at a steep cost: android realism dispenses with Deleuze and Guattari’s desiring subject, which is necessary for a theory of revolution by way of the psychoanalytic insistence on the human ability to overcome biological instincts (e.g. Freud’s Instincts and their Vicissitudes and Beyond the Pleasure Principle). Realist interpretations of Deleuze conceive of the subject as fully of this world. And with it, thought all but evaporates under the weight of this world. Deleuze’s Hume book is an early version of this criticism, but the realists have not taken heed. Whether emergent, entangled, or actant, strong realists ignore Deleuze and Guattari’s point in What is Philosophy? that thought always comes from the outside at a moment when we are confronted by something so intolerable that the only thing remaining is to think.

    Galloway: The left has always been ambivalent about media and technology, sometimes decrying its corrosive influence (Frankfurt School), sometimes embracing its revolutionary potential (hippy cyberculture). Still, you ditch technical “acceleration” in favor of “escape.” Can you expand your position on media and technology, by way of Deleuze’s notion of the machinic?

    Culp: Foucault says that an episteme can be grasped as we are leaving it. Maybe we can finally catalogue all of the contemporary positions on technology? The romantic (computer will never capture my soul), the paranoiac (there is an unknown force pulling the strings), the fascist-pessimist (computers will control everything)…

    Deleuze and Guattari are certainly not allergic to technology. My favorite quote actually comes from the Foucault book, in which Deleuze says that “technology is social before it is technical” (6). The lesson we can draw from this is that every social formation draws out different capacities from any given technology. An easy example comes from the nomads Deleuze loved so much. Anarcho-primitivists speculate that humans learn oppression with the domestication of animals and settled agriculture during the Neolithic Revolution. Diverging from this narrative, Deleuze celebrates the horse people of the Eurasian steppe described by Arnold Toynbee. Threatened by forces that would require them to change their habitat, Toynbee says, they instead chose to change their habits. The subsequent domestication of the horse did not sow the seeds of the state, which was actually done by those who migrated from the steppes after the last Ice Age to begin wet rice cultivation in alluvial valleys (for more, see James C. Scott’s The Art of Not Being Governed). On the contrary, the new relationship between men and horses allowed nomadism to achieve a higher speed, which was necessary to evade the raiding-and-trading used by padi-states to secure the massive foreign labor needed for rice farming. This is why the nomad is “he who does not move” and not a migrant (A Thousand Plateaus, 381).

    Accelerationism attempts to overcome the capitalist opposition of human and machine through the demand for full automation. As such, it peddles in technological Proudhonism that believes one can select what is good about technology and just delete what is bad. The Marxist retort is that development proceeds by its bad side. So instead of flashy things like self-driving cars, the real dot-communist question is: how will Amazon automate the tedious, low-paying jobs that computers are no good at? What happens to the data entry clerks, abusive-content managers, or help desk technicians? Until it figures out who will empty the recycle bin, accelerationism is only a socialism of the creative class.

    The machinic is more than just machines–it approaches technology as a question of organization. The term is first used by Guattari in a 1968 paper titled “Machine and Structure” that he presented to Lacan’s Freudian School of Paris, a paper that would jumpstart his collaboration with Deleuze. He argues for favoring machine over structure. Structures transform parts of a whole by exchanging or substituting particularities so that every part shares in a general form (in other words, the production of isomorphism). An easy political example is the Leninist Party, which mediates the particularized private interests to form them into the general will of a class. Machines instead treat the relationship between things as a problem of communication. The result is the “control and communication” of Norbert Wiener’s cybernetics, which connects distinct things in a circuit instead of implanting a general logic. The word “machine” never really caught on, but the concept has made inroads in the social sciences, where actor-network theory, game theory, behaviorism, systems theory, and other cybernetic approaches have gained acceptance.

    Structure or machine, each engenders a different type of subjectivity, and each realizes a different model of communication. The two are found in A Thousand Plateaus, where Deleuze and Guattari note two different types of state subject formation: social subjection and machinic enslavement (456-460). While it only takes up a few short pages, the distinction is essential to Bernard Stiegler’s work and has been expertly elaborated by Maurizio Lazzarato in the book Signs and Machines. We are all familiar with molar social subjection synonymous with “agency”–it is the power that results from individuals bridging the gap between themselves and broader structures of representation, social roles, and institutional demands. This subjectivity is well outlined by Lacanians and other theorists of the linguistic turn (Virno, Rancière, Butler, Agamben). Missing from their accounts is machinic enslavement, which treats people as simply cogs in the machine. Such subjectivity is largely overlooked because it bypasses existential questions of recognition or self-identity. This is because machinic enslavement operates at the level of the infra-social or pre-individual through the molecular operators of unindividuated affects, sensations, desires not assigned to a subject. Offering a concrete example, Deleuze and Guattari reference Mumford’s megamachines of surplus societies that create huge landworks by treating humans as mere constituent parts. Capitalism revived the megamachine in the sixteenth century, and more recently, we have entered the “third age” of enslavement marked by the development of cybernetic and informational machines. In place of the pyramids are technical machines that use humans at places in technical circuits where computers are incapable or too costly, e.g. Amazon’s Mechanical Turk.

    I should also clarify that not all machines are bad. Rather, Dark Deleuze only trusts one kind of machine, the war machine. And war machines follow a single trajectory–a line of flight out of this world. A major task of the war machine conveniently aligns with my politics of techno-anarchism: to blow apart the networks of communication created by the state.

    Galloway: I can’t resist a silly pun, cannon of joy. Part of your project is about resisting a certain masculinist tendency. Is that a fair assessment? How do feminism and queer theory influence your project?

    Culp: Feminism is hardwired into the tagline for Dark Deleuze through a critique of emotional labor and the exhibition of bodies–“A revolutionary Deleuze for today’s digital world of compulsory happiness, decentralized control, and overexposure.” The major thread I pull through the book is a materialist feminist one: something intolerable about this world is that it demands we participate in its accumulation and reproduction. So how about a different play on words: Sara Ahmed’s feminist killjoy, who refuses the sexual contract that requires women to appear outwardly grateful and agreeable? Or better yet, Joy Division? The name would associate the project with post-punk, its conceptual attack on the mainstream, and the band’s nod to the sexual labor depicted in the novella House of Dolls.

    My critique of accumulation is also a media argument about connection. The most popular critics of ‘net culture are worried that we are losing ourselves. So on the one hand, we have Sherry Turkle who is worried that humans are becoming isolated in a state of being “alone-together”; and on the other, there is Bernard Stiegler, who thinks that the network supplants important parts of what it means to be human. I find this kind of critique socially conservative. It also victim-blames those who use social media the most. Recall the countless articles attacking women who take selfies as part of self-care regimen or teens who creatively evade parental authority. I’m more interested in the critique of early ’90s ‘net culture and its enthusiasm for the network. In general, I argue that network-centric approaches are now the dominant form of power. As such, I am much more interested in how the rhizome prefigures the digitally-coordinated networks of exploitation that have made Apple, Amazon, and Google into the world’s most powerful corporations. While not a feminist issue on its face, it’s easy to see feminism’s relevance when we consider the gendered division of labor that usually makes women the employees of choice for low-paying jobs in electronics manufacturing, call centers, and other digital industries.

    Lastly, feminism and queer theory explicitly meet in my critique of reproduction. A key argument of Deleuze and Guattari in Anti-Oedipus is the auto-production of the real, which is to say, we already live in a “world without us.” My argument is that we need to learn how to hate some of the things it produces. Of course, this is a reworked critique of capitalist alienation and exploitation, which is a system that gives to us (goods and the wage) only because it already stole them behind our back (restriction from the means of subsistence and surplus value). Such ambivalence is the everyday reality of the maquiladora worker who needs her job but may secretly hope that all the factories burn to the ground. Such degrading feelings are the result of the compromises we make to reproduce ourselves. In the book, I give voice to them by fusing together David Halperin and Valerie Traub’s notion of gay shame acting as a solvent to whatever binds us to identity and Deleuze’s shame at not being able to prevent the intolerable. But feeling shame is not enough. To complete the argument, we need to draw out the queer feminist critique of reproduction latent in Marx and Freud. Détourning an old phrase: direct action begins at the point of reproduction. My first impulse is to rely on the punk rock attitude of Lee Edelman and Paul Preciado’s indictment of reproduction. But you are right that they have their masculinist moments, so what we need is something more post-punk–a little less aggressive and a lot more experimental. Hopefully Dark Deleuze is that.

    Galloway: Edelman’s “fuck Annie” is one of the best lines in recent theory. “Fuck the social order and the Child in whose name we’re collectively terrorized; fuck Annie; fuck the waif from Les Mis; fuck the poor, innocent kid on the Net; fuck Laws both with capital ls and small; fuck the whole network of Symbolic relations and the future that serves as its prop” (No Future, 29). Your book claims, in essence, that the Fuck Annies are more interesting than the Aleatory Materialists. But how can we escape the long arm of Lucretius?

    Culp: My feeling is that the politics of aleatory materialism remains ambiguous. Beyond the literal meaning of “joy,” there are important feminist takes on the materialist Spinoza of the encounter that deserve our attention. Isabelle Stengers’s work is among the most comprehensive, though the two most famous are probably Donna Haraway’s cyborg feminism and Karen Barad’s agential realism. Curiously, while New Materialism has been quite a boon for the art and design world, its socio-political stakes have never been more uncertain. One would hope that appeals to matter would lend philosophical credence to topical events such as #blacklivesmatter. Yet for many, New Materialism has simply led to a new formalism focused on material forms or realist accounts of physical systems meant to eclipse the “epistemological excesses” of post-structuralism. This divergence was not lost on commentators in the most recent issue of October, which functioned as a sort of referendum on New Materialism. On the one hand, the issue included a generous accounting of the many avenues artists have taken in exploring various “new materialist” directions. Of those, I most appreciated Mel Chen’s reminder that materialism cannot serve as a “get out of jail free card” on the history of racism, sexism, ablism, and speciesism. On the other hand, it included the first sustained attack on New Materialism by fellow travelers. Certainly the New Materialist stance of seeing the world from the perspective of “real objects” can be valuable, but only if it does not exclude old materialism’s politics of labor. I draw from Deleuzian New Materialist feminists in my critique of accumulation and reproduction, but only after short-circuiting their world-building. This is a move I learned from Sue Ruddick, whose Theory, Culture & Society article on the affect of the philosopher’s scream is an absolute tour de force.
And then there is Graham Burnett’s remark that recent materialisms are like “Etsy kissed by philosophy.” The phrase perfectly crystallizes the controversy, but it might be too hot to touch for at least a decade…

    Galloway: Let’s focus more on the theme of affirmation and negation, since the tide seems to be changing. In recent years, a number of theorists have turned away from affirmation toward a different set of vectors such as negation, eclipse, extinction, or pessimism. Have we reached peak affirmation?

    Culp: We should first nail down what affirmation means in this context. There is the metaphysical version of affirmation, such as Foucault’s proud title as a “happy positivist.” In this declaration in Archaeology of Knowledge and “The Order of Discourse,” he is not claiming to be a logical positivist. Rather, Foucault is distinguishing his approach from Sartrean totality, transcendentalism, and genetic origins (his secondary target being the reading-between-the-lines method of Althusserian symptomatic reading). He goes on to formalize this disagreement in his famous statement on the genealogical method, “Nietzsche, Genealogy, History.” Despite being an admirer of Sartre, Deleuze shares this affirmative metaphysics with Foucault, which commentators usually describe as an alternative to the Hegelian system of identity, contradiction, determinate negation, and sublation. Nothing about this “happily positivist” system forces us to be optimists. In fact, it only raises the stakes for locating how all the non-metaphysical senses of the negative persist.

    Affirmation could be taken to imply a simple “more is better” logic as seen in Assemblage Theory and Latourian Compositionalism. Behind this logic is a principle of accumulation that lacks a theory of exploitation and fails to consider the power of disconnection. The Spinozist definition of joy does little to dispel this myth, but it is not like either project has revolutionary political aspirations. I think we would be better served to follow the currents of radical political developments over the last twenty years, which have been following an increasingly negative path. One part of the story is a history of failure. The February 15, 2003 global demonstration against the Iraq War was the largest protest in history but had no effect on the course of the war. More recently, the election of democratic socialist governments in Europe has done little to stave off austerity, even as economists publicly describe it as a bankrupt model destined to deepen the crisis. I actually find hope in the current circuit of struggle and think that its lack of alter-globalization world-building aspirations might be a plus. My cues come from the anarchist black bloc and those of the post-Occupy generation who would rather not pose any demands. This is why I return to the late Deleuze of the “control societies” essay and his advice to scramble the codes, to seek out spaces where nothing needs to be said, and to establish vacuoles of non-communication. Those actions feed the subterranean source of Dark Deleuze‘s darkness and the well from which comes hatred, cruelty, interruption, un-becoming, escape, cataclysm, and the destruction of worlds.

    Galloway: Does hatred for the world do a similar work for you that judgment or moralism does in other writers? How do we avoid the more violent and corrosive forms of hate?

    Culp: Writer Antonin Artaud’s attempt “to have done with the judgment of God” plays a crucial role in Dark Deleuze. Not just any specific authority but whatever gods are left. The easiest way to summarize this is “the three deaths.” Deleuze already makes note of these deaths in the preface to Difference and Repetition, but it only became clear to me after I read Gregg Flaxman’s Gilles Deleuze and the Fabulation of Philosophy. We all know of Nietzsche’s Death of God. With it, Nietzsche notes that God no longer serves as the central organizing principle for us moderns. Important to Dark Deleuze is Pierre Klossowski’s Nietzsche, who is part of a conspiracy against all of humanity. Why? Because even as God is dead, humanity has replaced him with itself. Next comes the Death of Man, which we can lay at the feet of Foucault. More than any other text, The Order of Things demonstrates how the birth of modern man was an invention doomed to fail. So if that death is already written in sand about to be washed away, then what comes next? Here I turn to the world, worlding, and world-building. It seems obvious when looking at the problems that plague our world: global climate change, integrated world capitalism, and other planet-scale catastrophes. We could try to deal with each problem one by one. But why not pose an even more radical proposition? What if we gave up on trying to save this world? We are already awash in sci-fi that tries to do this, though most of it is incredibly socially conservative. Perhaps now is the time for thinkers like us to catch up. Fragments of Deleuze already lay out the terms of the project. He ends the preface to Difference and Repetition by assigning philosophy the task of writing apocalyptic science fiction. Deleuze’s book opens with lightning across the black sky and ends with the world swelling into a single ocean of excess. Dark Deleuze collects those moments and names them the Death of This World.

    Galloway: Speaking of climate change, I’m reminded how ecological thinkers can be very religious, if not in word then in deed. Ecologists like to critique “nature” and tout their anti-essentialist credentials, while at the same time promulgating tellurian “change” as necessary, even beneficial. Have they simply replaced one irresistible force with another? But your “hatred of the world” follows a different logic…

    Culp: Irresistible indeed! Yet it is very dangerous to let the earth have the final say. Not only does psychoanalysis teach us that it is necessary to buck the judgment of nature; the is/ought distinction at the philosophical core of most ethical thought also refuses to let natural fact define the good. I introduce hatred to develop a critical distance from what is, and, as such, hatred is also a reclamation of the future in that it is a refusal to allow what-is to prevail over what-could-be. Such an orientation to the future is already in Deleuze and Guattari. What else is de-territorialization? I just give it a name. They have another name for what I call hatred: utopia.

    Speaking of utopia, Deleuze and Guattari’s definition of utopia in What is Philosophy? as simultaneously now-here and no-where is often used by commentators to justify odd compromise positions with the present state of affairs. The immediate reference is Samuel Butler’s 1872 book Erewhon, a backward spelling of nowhere, which Deleuze also references across his other work. I would imagine most people would assume it is a utopian novel in the vein of Edward Bellamy’s Looking Backward. And Erewhon does borrow from the conventions of utopian literature, but only to skewer them with satire. A closer examination reveals that the book is really a jab at religion, Victorian values, and the British colonization of New Zealand! So if there is anything that the now-here of Erewhon has to contribute to utopia, it is that the present deserves our ruthless criticism. So instead of being a simultaneous now-here and no-where, hatred follows from Deleuze and Guattari’s suggestion in A Thousand Plateaus to “overthrow ontology” (25). Therefore, utopia is only found in Erewhon by taking leave of the now-here to get to no-where.

    Galloway: In Dark Deleuze you talk about avoiding “the liberal trap of tolerance, compassion, and respect.” And you conclude by saying that the “greatest crime of joyousness is tolerance.” Can you explain what you mean, particularly for those who might value tolerance as a virtue?

    Culp: Among the many followers of Deleuze today, there are a number of liberal Deleuzians. Perhaps the biggest stronghold is in political science, where there is a committed group of self-professed radical liberals. Another strain bridges Deleuze with the liberalism of John Rawls. I was a bit shocked to discover both of these approaches, but I suppose it was inevitable given liberalism’s ability to assimilate nearly any form of thought.

    Herbert Marcuse recognized “repressive tolerance” as the incredible power of liberalism to justify the violence of positions clothed as neutral. The examples Marcuse cites are governments that claim to respect democratic liberties because they allow political protest, even as they ignore protesters by labeling them a special interest group. For those of us who have seen university administrations calmly collect student demands, set up dead-end committees, and slap pictures of protestors on promotional materials as a badge of diversity, it should be no surprise that Marcuse dedicated the essay to his students. An important elaboration on repressive tolerance is Wendy Brown’s Regulating Aversion, which argues that imperialist US foreign policy drapes itself in the discourse of tolerance. This helps diagnose why liberal feminist groups lined up behind the US invasion of Afghanistan (the Taliban is patriarchal) and explains how a mere utterance of ISIS inspires even the most progressive liberals to support outrageous war budgets.

    Because of their commitment to democracy, Brown and Marcuse can only qualify liberalism’s universal procedures for an ethical subject. Each criticizes certain uses of tolerance but does not want to dispense with it completely. Deleuze’s hatred of democracy makes it much easier for me. Instead, I embrace the perspective of a communist partisan because communists fight from a different structural position than that of the capitalist.

    Galloway: Speaking of structure and position, you have a section in the book on asymmetry. Most authors avoid asymmetry, instead favoring concepts like exchange or reciprocity. I’m thinking of texts on “the encounter” or “the gift,” not to mention dialectics itself as a system of exchange. Still you want to embrace irreversibility, incommensurability, and formal inoperability–why?

    Culp: There are a lot of reasons to prefer asymmetry, but for me, it comes down to a question of political strategy.

    First, a little background. Deleuze and Guattari’s critique of exchange is important to Anti-Oedipus, which was staged through a challenge to Claude Lévi-Strauss. This is why they shift from the traditional Marxist analysis of mode of production to an anthropological study of anti-production, for which they use the work of Pierre Clastres and Georges Bataille to outline non-economic forms of power that prevented the emergence of capitalism. Contemporary anthropologists have renewed this line of inquiry, for instance, Eduardo Viveiros de Castro, who argues in Cannibal Metaphysics that cosmologies differ radically enough between peoples that they essentially live in different worlds. The cannibal, he shows, is not the subject of a mode of production but a mode of predation.

    Those are not the stakes that interest me the most. Consider instead the consequence of ethical systems built on the gift and political systems of incommensurability. The ethical approach is exemplified by Derrida, whose responsibility to the other draws from the liberal theological tradition of accepting the stranger. While there is distance between self and other, it is a difference that is bridged through the democratic project of radical inclusion, even if such incorporation can only be aporetically described as a necessary-impossibility. In contrast, the politics of asymmetry uses incommensurability to widen the chasm opened by difference. It offers a strategy for generating antagonism without the formal equivalence of dialectics and provides an image of revolution based on fundamental transformation. The former can be seen in the inherent difference between the perspective of labor and the perspective of capital, whereas the latter is a way out of what Guy Debord calls “a perpetual present.”

    Galloway: You are exploring a “dark” Deleuze, and I’m reminded how the concepts of darkness and blackness have expanded and interwoven in recent years in everything from afro-pessimism to black metal theory (which we know is frighteningly white). How do you differentiate between darkness and blackness? Or perhaps that’s not the point?

    Culp: The writing on Deleuze and race is uneven. A lot of it can be blamed on the imprecise definition of becoming. The most vulgar version of becoming is embodied by neoliberal subjects who undergo an always-incomplete process of coming more into being (finding themselves, identifying their capacities, commanding their abilities). The molecular version is a bit better in that it theorizes subjectivity as developing outside of or in tension with identity. Yet the prominent uses of becoming and race rarely escaped the postmodern orbit of hybridity, difference, and inclusive disjunction–the White Man’s face as master signifier, miscegenation as anti-racist practice, “I am all the names of history.” You are right to mention afro-pessimism, as it cuts a new way through the problem. As I’ve written elsewhere, Frantz Fanon describes being caught between “infinity and nothingness” in his famous chapter on the fact of blackness in Black Skin White Masks. The position of infinity is best championed by Fred Moten, whose black fugitive is the effect of an excessive vitality that has survived five hundred years of captivity. He catches fleeting moments of it in performances of jazz, art, and poetry. This position fits well with the familiar figures of Deleuzo-Guattarian politics: the itinerant nomad, the foreigner speaking in a minor tongue, the virtuoso trapped in-between lands. In short: the bastard combination of two or more distinct worlds. In contrast, afro-pessimism is not the opposite of the black radical tradition but its outside. According to afro-pessimism, the definition of blackness is nothing but the social death of captivity. Remember the scene of subjection mentioned by Fanon? During that nauseating moment he is assailed by a whole series of cultural associations attached to him by strangers on the street: “I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: ‘Sho’ good eatin’’” (112). The lesson that afro-pessimism draws from this scene is that cultural representations of blackness only reflect back the interior of white civil society. The conclusion is that combining social death with a culture of resistance, such as the one embodied by Fanon’s mentor Aimé Césaire, is a trap that leads only back to whiteness. Afro-pessimism thus follows the alternate route of darkness. It casts a line to the outside through an un-becoming that dissolves the identity we are given as a token for the shame of being a survivor.

    Galloway: In a recent interview the filmmaker Haile Gerima spoke about whiteness as “realization.” By this he meant both realization as such–self-realization, the realization of the self, the ability to realize the self–but also the more nefarious version as “realization through the other.” What’s astounding is that one can replace “through” with almost any other preposition–for, against, with, without, etc.–and the dynamic still holds. Whiteness is the thing that turns everything else, including black bodies, into fodder for its own realization. Is this why you turn away from realization toward something like profanation? And is darkness just another kind of whiteness?

    Culp: Perhaps blackness is to the profane as darkness is to the outside. What is black metal if not a project of political-aesthetic profanation? But as other commentators have pointed out, the politics of black metal is ultimately telluric (e.g. Benjamin Noys’s “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal”). The left wing of black metal is anarchist anti-civ and the right is fascist-nativist. Both trace authority back to the earth that they treat as an ultimate judge usurped by false idols.

    The process follows what Badiou calls “the passion for the real,” his diagnosis of the Twentieth Century’s obsession with true identity, false copies, and inauthentic fakes. His critique equally applies to Deleuzian realists. This is why I think it is essential to return to Deleuze’s work on cinema and the powers of the false. One key example is Orson Welles’s F for Fake. Yet my favorite is the noir novel, which he praises in “The Philosophy of Crime Novels.” The noir protagonist never follows in the footsteps of Sherlock Holmes or other classical detectives’ search for the real, which happens by sniffing out the truth through a scientific attunement of the senses. Rather, the dirty streets lead the detective down enough dead ends that he proceeds by way of a series of errors. What noir reveals is that crime and the police have “nothing to do with a metaphysical or scientific search for truth” (82). The truth is rarely decisive in noir because breakthroughs only come by way of “the great trinity of falsehood”: informant-corruption-torture. The ultimate gift of noir is a new vision of the world whereby honest people are just dupes of the police because society is fueled by falsehood all the way down.

    To specify the descent to darkness, I use darkness to signify the outside. The outside has many names: the contingent, the void, the unexpected, the accidental, the crack-up, the catastrophe. The dominant affects associated with it are anticipation, foreboding, and terror. To give a few examples, H. P. Lovecraft’s scariest monsters are those so alien that characters cannot describe them with any clarity, Maurice Blanchot’s disaster is the Holocaust as well as any other event so terrible that it interrupts thinking, and Don DeLillo’s “airborne toxic event” is an incident so foreign that it can only be described in the most banal terms. Of Deleuze and Guattari’s many different bodies without organs, one of the conservative varieties comes from a Freudian model of the psyche as a shell meant to protect the ego from outside perturbations. We all have these protective barriers made up of habits that help us navigate an uncertain world–that is the purpose of Guattari’s ritornello, that little ditty we whistle to remind us of the familiar even when we travel to strange lands. There are two parts that work together, the refrain and the strange land. The refrains have only grown yet the journeys seem to have ended.

    I’ll end with an example close to my own heart. Deleuze and Guattari are being used to support new anarchist “pre-figurative politics,” which is defined as seeking to build a new society within the constraints of the now. The consequence is that the political horizon of the future gets collapsed into the present. This is frustrating for someone like me, who holds out hope for a revolutionary future that would end the million tiny humiliations that make up everyday life. I like J. K. Gibson-Graham’s feminist critique of political economy, but community currencies, labor time banks, and worker’s coops are not my image of communism. This is why I have drawn on the gothic for inspiration. A revolution that emerges from the darkness holds the apocalyptic potential of ending the world as we know it.

    Works Cited

    • Ahmed, Sara. The Promise of Happiness. Durham, NC: Duke University Press, 2010.
    • Artaud, Antonin. To Have Done With The Judgment of God. 1947. Live play, Boston: Exploding Envelope, c1985. https://www.youtube.com/watch?v=VHtrY1UtwNs.
    • Badiou, Alain. The Century. 2005. Cambridge, UK: Polity Press, 2007.
    • Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 2007.
    • Bataille, Georges. “The Notion of Expenditure.” 1933. In Visions of Excess: Selected Writings, 1927-1939, translated by Allan Stoekl, Carl R. Lovitt, and Donald M. Leslie Jr., 167-81. Minneapolis: University of Minnesota Press, 1985.
    • Bellamy, Edward. Looking Backward: 2000–1887. Boston: Ticknor & co., 1888.
    • Blanchot, Maurice. The Writing of the Disaster. 1980. Translated by Ann Smock. Lincoln, NE: University of Nebraska Press, 1995.
    • Brown, Wendy. Regulating Aversion: Tolerance in the Age of Identity and Empire. Princeton, N.J.: Princeton University Press, 2006.
    • Burnett, Graham. “A Questionnaire on Materialisms.” October 155 (2016): 19-20.
    • Butler, Samuel. Erewhon: or, Over the Range. 1872. London: A.C. Fifield, 1910. http://www.gutenberg.org/files/1906/1906-h/1906-h.htm.
    • Chen, Mel Y. “A Questionnaire on Materialisms.” October 155 (2016): 21-22.
    • Clastres, Pierre. Society against the State. 1974. Translated by Robert Hurley and Abe Stein. New York: Zone Books, 1987.
    • Culp, Andrew. Dark Deleuze. Minneapolis: University of Minnesota Press, 2016.
    • ———. “Blackness.” New York: Hostis, 2015.
    • Debord, Guy. The Society of the Spectacle. 1967. Translated by Fredy Perlman et al. Detroit: Red and Black, 1977.
    • DeLanda, Manuel. A Thousand Years of Nonlinear History. New York: Zone Books, 2000.
    • ———. War in the Age of Intelligent Machines. New York: Zone Books, 1991.
    • DeLillo, Don. White Noise. New York: Viking Press, 1985.
    • Deleuze, Gilles. Cinema 2: The Time-Image. 1985. Translated by Hugh Tomlinson and Robert Galeta. Minneapolis: University of Minnesota Press, 1989.
    • ———. “The Philosophy of Crime Novels.” 1966. Translated by Michael Taormina. In Desert Islands and Other Texts, 1953-1974, 80-85. New York: Semiotext(e), 2004.
    • ———. Difference and Repetition. 1968. Translated by Paul Patton. New York: Columbia University Press, 1994.
    • ———. Empiricism and Subjectivity: An Essay on Hume’s Theory of Human Nature. 1953. Translated by Constantin V. Boundas. New York: Columbia University Press, 1995.
    • ———. Foucault. 1986. Translated by Seán Hand. Minneapolis: University of Minnesota Press, 1988.
    • Deleuze, Gilles, and Félix Guattari. Anti-Oedipus. 1972. Translated by Robert Hurley, Mark Seem, and Helen R. Lane. Minneapolis: University of Minnesota Press, 1977.
    • ———. A Thousand Plateaus. 1980. Translated by Brian Massumi. Minneapolis: University of Minnesota Press, 1987.
    • ———. What Is Philosophy? 1991. Translated by Hugh Tomlinson and Graham Burchell. New York: Columbia University Press, 1994.
    • Derrida, Jacques. The Gift of Death and Literature in Secret. Translated by David Wills. Chicago: University of Chicago Press, 2007; second edition.
    • Edelman, Lee. No Future: Queer Theory and the Death Drive. Durham, N.C.: Duke University Press, 2004.
    • Fanon, Frantz. Black Skin White Masks. 1952. Translated by Charles Lam Markmann. New York: Grove Press, 1968.
    • Flaxman, Gregory. Gilles Deleuze and the Fabulation of Philosophy. Minneapolis: University of Minnesota Press, 2011.
    • Foucault, Michel. The Archaeology of Knowledge and the Discourse on Language. 1971. Translated by A.M. Sheridan Smith. New York: Pantheon Books, 1972.
    • ———. “Nietzsche, Genealogy, History.” 1971. In Language, Counter-Memory, Practice: Selected Essays and Interviews, translated by Donald F. Bouchard and Sherry Simon, 113-38. Ithaca, N.Y.: Cornell University Press, 1977.
    • ———. The Order of Things. 1966. New York: Pantheon Books, 1970.
    • Freud, Sigmund. Beyond the Pleasure Principle. 1920. Translated by James Strachey. London: Hogarth Press, 1955.
    • ———. “Instincts and their Vicissitudes.” 1915. Translated by James Strachey. In Standard Edition of the Complete Psychological Works of Sigmund Freud 14, 111-140. London: Hogarth Press, 1957.
    • Gerima, Haile. “Love Visual: A Conversation with Haile Gerima.” Interview by Sarah Lewis and Dagmawi Woubshet. Aperture, Feb 23, 2016. http://aperture.org/blog/love-visual-haile-gerima/.
    • Gibson-Graham, J.K. The End of Capitalism (As We Knew It): A Feminist Critique of Political Economy. Hoboken: Blackwell, 1996.
    • ———. A Postcapitalist Politics. Minneapolis: University of Minnesota Press, 2006.
    • Guattari, Félix. “Machine and Structure.” 1968. Translated by Rosemary Sheed. In Molecular Revolution: Psychiatry and Politics, 111-119. Harmondsworth, Middlesex: Penguin, 1984.
    • Halperin, David, and Valerie Traub. “Beyond Gay Pride.” In Gay Shame, 3-40. Chicago: University of Chicago Press, 2009.
    • Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
    • Klossowski, Pierre. “Circulus Vitiosus.” Translated by Joseph Kuzma. The Agonist: A Nietzsche Circle Journal 2, no. 1 (2009): 31-47.
    • ———. Nietzsche and the Vicious Circle. 1969. Translated by Daniel W. Smith. Chicago: University of Chicago Press, 1997.
    • Lazzarato, Maurizio. Signs and Machines. 2010. Translated by Joshua David Jordan. Los Angeles: Semiotext(e), 2014.
    • Marcuse, Herbert. “Repressive Tolerance.” In A Critique of Pure Tolerance, 81-117. Boston: Beacon Press, 1965.
    • Mauss, Marcel. The Gift: The Form and Reason for Exchange in Archaic Societies. 1950. Translated by W. D. Halls. New York: Routledge, 1990.
    • Moten, Fred. In The Break: The Aesthetics of the Black Radical Tradition. Minneapolis: University of Minnesota Press, 2003.
    • Mumford, Lewis. Technics and Human Development. San Diego: Harcourt Brace Jovanovich, 1967.
    • Noys, Benjamin. “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal.” In Hideous Gnosis: Black Metal Theory Symposium 1 (2010): 105-128.
    • Preciado, Paul. Testo-Junkie: Sex, Drugs, and Biopolitics in the Pharmacopornographic Era. 2008. Translated by Bruce Benderson. New York: The Feminist Press, 2013.
    • Ruddick, Susan. “The Politics of Affect: Spinoza in the Work of Negri and Deleuze.” Theory, Culture, Society 27, no. 4 (2010): 21-45.
    • Scott, James C. The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. New Haven: Yale University Press, 2009.
    • Sexton, Jared. “Afro-Pessimism: The Unclear Word.” In Rhizomes 29 (2016). http://www.rhizomes.net/issue29/sexton.html.
    • ———. “Ante-Anti-Blackness: Afterthoughts.” In Lateral 1 (2012). http://lateral.culturalstudiesassociation.org/issue1/content/sexton.html.
    • ———. “The Social Life of Social Death: On Afro-Pessimism and Black Optimism.” In Intensions 5 (2011). http://www.yorku.ca/intent/issue5/articles/jaredsexton.php.
    • Stiegler, Bernard. For a New Critique of Political Economy. Cambridge: Polity Press, 2010.
    • ———. Technics and Time 1: The Fault of Epimetheus. 1994. Translated by George Collins and Richard Beardsworth. Redwood City, CA: Stanford University Press, 1998.
    • Tiqqun. “How Is It to Be Done?” 2001. In Introduction to Civil War. 2001. Translated by Alexander R. Galloway and Jason E. Smith. Los Angeles, Calif.: Semiotext(e), 2010.
    • Toynbee, Arnold. A Study of History. Abridgement of Volumes I-VI by D.C. Somervell. London, Oxford University Press, 1946.
    • Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2012.
    • Viveiros de Castro, Eduardo. Cannibal Metaphysics: For a Post-structural Anthropology. 2009. Translated by Peter Skafish. Minneapolis, Minn.: Univocal, 2014.
    • Villani, Arnaud. La guêpe et l’orchidée. Essai sur Gilles Deleuze. Paris: Éditions de Belin, 1999.
    • Welles, Orson, dir. F for Fake. 1974. New York: Criterion Collection, 2005.
    • Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948; second revised edition.
    • Williams, Alex, and Nick Srnicek. “#ACCELERATE MANIFESTO for an Accelerationist Politics.” Critical Legal Thinking. 2013. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is the author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. He is a frequent contributor to The b2 Review “Digital Studies.”

    Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues. He previously pre-reviewed Galloway’s Laruelle: Against the Digital for The b2 Review “Digital Studies.”


  • Alexander R. Galloway — From Data to Information


    By Alexander R. Galloway
    ~

    In recent months I’ve been spending time learning Swift. As such, I’ve been thinking a lot about data structures. Swift has a nice spectrum of possible data structures to pick from — something that I’ll have to discuss another day — but what interests me here is the question of data itself. Scholars often treat etymology as a special kind of divination. (And philosophers like Heidegger made a career of it.) But I find the etymology of the word “data” to be particularly elegant and revealing.
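    That spectrum of Swift data structures can at least be gestured at with a minimal sketch (the type and variable names below are illustrative assumptions of mine, not anything prescribed by the language or drawn from this post):

    ```swift
    // A brief, hypothetical tour of Swift's built-in collection types.

    // Array: an ordered collection of givens.
    let readings: [Int] = [3, 1, 4, 1, 5]

    // Set: unordered, with duplicates collapsed.
    let uniqueReadings = Set(readings)

    // Dictionary: givens keyed by name.
    let labeled: [String: Int] = ["first": 3, "second": 1]

    // Struct: a custom value type that gives the data a form.
    struct Datum {
        let value: Int
        let source: String
    }

    let d = Datum(value: 42, source: "sensor")

    print(uniqueReadings.count)   // 4 distinct values out of 5 readings
    print(labeled["first"]!)      // 3
    print(d.value)                // 42
    ```

    All four are value types, which is part of what makes choosing among them interesting in Swift.
    
    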

    Data comes from the Latin dare, meaning to give. But it’s the form that’s most interesting. First of all, it’s in the neuter plural, so it refers to “things.” Second, data is a participle in the perfect passive form. Thus the word means literally “the things having been given.” Or, for short, I like to think of data as “the givens.” French preserves this double meaning nicely by calling data the données. (The French also use the word “data,” although I believe this is technically an anglicism imported from technical vocabulary, despite French being much closer to Latin than English.)

    Data are the things having been given. Using the language of philosophy, and more specifically of phenomenology, data are the very facts of the givenness of Being. They are knowable and measurable. Data display a facticity; they are “what already exists,” and as such are a determining apparatus. They indicate what is present, what exists. The word data carries certain scientific or empirical undertones. But more important are the phenomenological overtones: data refer to the neutered, generic fact of the things having been given.

    Even in this simple arrangement a rudimentary relation holds sway. For implicit in the notion of the facticity of givenness is a relation to givenness. Data are not just a question of the givenness of Being, but are also necessarily illustrative of a relationship back toward a Being that has been given. In short, givenness itself implies a relation. This is one of the fundamental observations of phenomenology.

    Chicago datum

    Even if nothing specific can be said about a given entity x, it is possible to say that, if given, x is something as opposed to nothing, and therefore that x has a relationship to its own givenness as something. X is “as x”; the as-structure is all that is required to demonstrate that x exists in a relation. (By contrast, if x were immanent to itself, it would not be possible to assume relation. But by virtue of being made distinct as something given, givenness implies non-immanence and thus relation.) Such a “something” can be understood in terms of self-similar identity or, as the scientists say, negentropy, a striving to remain the same.

    So even as data are defined in terms of their givenness, their non-immanence with the one, they also display a relation with themselves. Through their own self-similarity or relation with themselves, they tend back toward the one (as the most generic instance of the same). The logic of data is therefore a logic of existence and identity: on the one hand, the facticity of data means that they exist, that they ex-sistere, meaning to stand out of or from; on the other hand, the givenness of data as something means that they assume a relationship of identity, as the self-similar “whatever entity” that was given.

    The true definition of data, therefore, is not simply “the things having been given.” The definition must conjoin givenness and relation. For this reason, data often go by another name, a name that more suitably describes the implicit imbrication of givenness and relation. The name is information.

    Information combines both aspects of data: the root form refers to a relationship (here a relationship of identity as same), while the prefix in refers to the entering into existence of form, the actual givenness of abstract form into real concrete formation.

    Heidegger sums it up well with the following observation about the idea: “All metaphysics including its opponent positivism speaks the language of Plato. The basic word of its thinking, that is, of his presentation of the Being of beings, is eidos, idea: the outward appearance in which beings as such show themselves. Outward appearance, however, is a manner of presence.” In other words, outward appearance or idea is not a deviation from presence, or some precondition that produces presence. Idea is precisely coterminous with presence. To understand data as information means to understand data as idea, but not just idea, also a host of related terms: form, class, concept, thought, image, outward appearance, shape, presence, or form-of-appearance.

    As Lisa Gitelman has reminded us, there is no such thing as “raw” data, because to enter into presence means to enter into form. An entity “in-form” is not a substantive entity, nor is it an objective one. The in-form is the negentropic transcendental of the situation, be it “material” like the givens or “ideal” like the encoded event. Hence an idea is just as much subject to in-formation as are material objects. An oak tree is in-formation, just as much as a computer file is in-formation.

    All of this is simply another way to understand Parmenides’s claim about the primary identity of philosophy: “Thought and being are the same.”

    [Contains a modified excerpt from Laruelle: Against the Digital (University of Minnesota Press, 2014), pp. 75-77.]
    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is the author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.


  • Artificial Intelligence as Alien Intelligence


    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures. While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons, and each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    alien planet

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact somewhat a different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being,” which he blames in his concluding paragraph for providing “theological and legislative comfort to chattel slavery,” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that, were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force over which humans intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility on those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us; indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves. It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. 

    I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.


  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cédric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North by Northwest (1959).

    Liz Deschenes, “Green Screen #4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.

    What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image. The artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe, into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up-display and playable world. The one divides into two.

    So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage are emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture have always been a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age-old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together, it must grapple with a fundamental question. Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their “parallel” nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic’s “The Memory of a Landscape” (2004) or Takeshi Murata’s “Monster Movie” (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, “multichannel” videos – we might even take a cue from online gaming circles and label them “massively multichannel” videos. They are multichannel not because they require multiple monitors, but because each individual block or “channel” within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create “an” image, must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a “one” they must first bifurcate the single image source into two or more separate images.
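The block partitioning these films exploit can be sketched in a few lines (a generic illustration of my own; real codecs such as MPEG use fixed macroblock sizes plus motion vectors and transforms that are omitted here). A single frame is divided into autonomous tiles, each a micro channel:

```python
def split_into_blocks(frame, block_size):
    """Partition a 2D pixel grid into block_size x block_size tiles."""
    height, width = len(frame), len(frame[0])
    blocks = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            tile = [row[left:left + block_size]
                    for row in frame[top:top + block_size]]
            blocks.append(tile)
    return blocks

# A 4x4 "frame" divided into four 2x2 blocks -- four micro video feeds.
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
blocks = split_into_blocks(frame, 2)
print(len(blocks))  # 4
print(blocks[0])    # [[0, 1], [4, 5]]
```

Each tile can then be encoded, degraded, or updated semi-independently of its neighbors, which is precisely the “parallel inscription” visible when such videos glitch.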

    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate or, greater still, into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

    For me this generates a more appealing explanation for why art and culture have, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology, but rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the “something” of the digital.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Network Pessimism

    Network Pessimism

    By Alexander R. Galloway
    ~

    I’ve been thinking a lot about pessimism recently. Eugene Thacker has been deep in this material for some time already. In fact he has a new, lengthy manuscript on pessimism called Infinite Resignation, which is a bit of a departure from his other books in terms of tone and structure. I’ve read it and it’s excellent. Definitely “the worst” he’s ever written! Following the style of other treatises from the history of philosophical pessimism–Leopardi, Cioran, Schopenhauer, Kierkegaard, and others–the bulk of the book is written in short aphorisms. It’s very poetic language, and some sections are driven by his own memories and meditations, all in an attempt to plumb the deepest, darkest corners of the worst the universe has to offer.

    Meanwhile, the worst can’t stay hidden. Pessimism has made it to prime time, to NPR, and even right-wing media. Despite all this attention, Eugene seems to have little interest in showing his manuscript to publishers. A true pessimist! Not to worry, I’m sure the book will see the light of day eventually. Or should I say dead of night? When it does, the book is sure to sadden, discourage, and generally worsen the lives of Thacker fans everywhere.

    Interestingly pessimism also appears in a number of other authors and fields. I’m thinking, for instance, of critical race theory and the concept of Afro-pessimism. The work of Fred Moten and Frank B. Wilderson, III is particularly interesting in that regard. Likewise queer theory has often wrestled with pessimism, be it the “no future” debates around reproductive futurity, or what Anna Conlan has simply labeled “homo-pessimism,” that is, the way in which the “persistent association of homosexuality with death and oppression contributes to a negative stereotype of LGBTQ lives as unhappy and unhealthy.”[1]

    In his review of my new book, Andrew Culp made reference to how some of this material has influenced me. I’ll be posting more on Moten and these other themes in the future, but let me here describe, in very general terms, how the concept of pessimism might apply to contemporary digital media.

    *

    A previous post was devoted to the reticular fallacy, defined as the false assumption that the erosion of hierarchical organization leads to an erosion of organization as such. Here I’d like to address the related question of reticular pessimism or, more simply, network pessimism.

    Network pessimism relies on two basic assumptions: (1) “everything is a network”; (2) “the best response to networks is more networks.”

    Who says everything is a network? Everyone, it seems. In philosophy, Bruno Latour: ontology is a network. In literary studies, Franco Moretti: Hamlet is a network. In the military, Donald Rumsfeld: the battlefield is a network. (But so too our enemies are networks: the terror network.) Art, architecture, managerial literature, computer science, neuroscience, and many other fields–all have shifted prominently in recent years toward a network model. Most important, however, is the contemporary economy and the mode of production. Today’s most advanced companies are essentially network companies. Google monetizes the shape of networks (in part via clustering algorithms). Facebook has rewritten subjectivity and social interaction along the lines of canalized and discretized network services. The list goes on and on. Thus I characterize the first assumption — “everything is a network” — as a kind of network fundamentalism. It claims that whatever exists in the world appears naturally in the form of a system, an ecology, an assemblage, in short, as a network.

    Ladies and gentlemen, behold the good news: postmodernism is definitively over! We have a new grand récit. As metanarrative, the network will guide us into a new Dark Age.

    If the first assumption expresses a positive dogma or creed, the second is more negative or nihilistic. The second assumption — that the best response to networks is more networks — is also evident in all manner of social and political life today. Eugene and I described this phenomenon at greater length in The Exploit, but consider a few different examples from contemporary debates… In military theory: network-centric warfare is the best response to terror networks. In Deleuzian philosophy: the rhizome is the best response to schizophrenic multiplicity. In autonomist Marxism: the multitude is the best response to empire. In the environmental movement: ecologies and systems are the best response to the systemic colonization of nature. In computer science: distributed architectures are the best response to bottlenecks in connectivity. In economics: heterogeneous “economies of scope” are the best response to the distributed nature of the “long tail.”

    To be sure, there are many sites today where networks still confront power centers. The point is not to deny the continuing existence of massified, centralized sovereignty. But at the same time it’s important to contextualize such confrontations within a larger ideological structure, one that inoculates the network form and recasts it as the exclusive site of liberation, deviation, political maturation, complex thinking, and indeed the very living of life itself.

    Why label this a pessimism? For the same reasons that queer theory and critical race theory are grappling with pessimism: Is alterity a death sentence? Is this as good as it gets? Is this all there is? Can we imagine a parallel universe different from this one? (Although the pro-pessimism camp would likely state it in the reverse: We must destabilize and annihilate all normative descriptions of the “good.” This world isn’t good, and hooray for that!)

    So what’s the problem? Why should we be concerned about network pessimism? Let me state clearly so there’s no misunderstanding: pessimism isn’t the problem here. Likewise, networks are not the problem. (Let no one label me “anti-network” or “anti-pessimism” — in fact I’m not even sure what either of those positions would mean.) The issue, as I see it, is that network pessimism deploys and sustains a specific dogma, confining both networks and pessimism to a single, narrow ideological position. It’s this narrow-mindedness that should be questioned.

    Specifically I can see three basic problems with network pessimism: the problem of presentism, the problem of ideology, and the problem of the event.

    The problem of presentism refers to the way in which networks and network thinking are, by design, allergic to historicization. This exhibits itself in a number of different ways. Networks arrive on the scene at the proverbial “end of history” (and they do so precisely because they help end this history). Ecological and systems-oriented thinking, while admittedly always temporal by nature, gained popularity as a kind of solution to the problems of diachrony. Space and landscape take the place of time and history. As Fredric Jameson has noted, the “spatial turn” of postmodernity goes hand in hand with a denigration of the “temporal moment” of previous intellectual movements.

    Fritz Kahn, “Der Mensch als Industriepalast (Man as Industrial Palace)” (Stuttgart, 1926). Image source: NIH

    From Hegel’s history to Luhmann’s systems. From Einstein’s general relativity to Riemann’s complex surfaces. From phenomenology to assemblage theory. From the “time image” of cinema to the “database image” of the internet. From the old mantra always historicize to the new mantra always connect.

    During the age of clockwork, the universe was thought to be a huge mechanism, with the heavens rotating according to the music of the spheres. When the steam engine was the source of newfound power, the world suddenly became a dynamo of untold thermodynamic force. After full-fledged industrialization, the body became a factory. Technologies and infrastructures are seductive metaphors. So it’s no surprise (and no coincidence) that today, in the age of the network, a new template imprints itself on everything in sight. In other words, the assumption “everything is a network” gradually falls apart into a kind of tautology of presentism. “Everything right now is a network…because everything right now has already been defined as a network.”

    This leads to the problem of ideology. Again we’re faced with an existential challenge, because network technologies were largely invented as a non-ideological or extra-ideological structure. When writing Protocol I interviewed some of the computer scientists responsible for the basic internet protocols and most of them reported that they “have no ideology” when designing networks, that they are merely interested in “code that works” and “systems that are efficient and robust.” In sociology and philosophy of science, figures like Bruno Latour routinely describe their work as “post-critical,” merely focused on the direct mechanisms of network organization. Hence ideology becomes a problem to be forgotten or subsumed: networks are specifically conceived and designed as those things that are both non-ideological in their conception (we just want to “get things done”) and post-ideological in their architecture (in that they acknowledge and co-opt the very terms of previous ideological debates, things like heterogeneity, difference, agency, and subject formation).

    The problem of the event indicates a crisis for the very concept of events themselves. Here the work of Alain Badiou is invaluable. Network architectures are the perfect instantiation of what Badiou derisively labels “democratic materialism,” that is, a world in which there are “only bodies and languages.” In Badiou’s terms, if networks are the natural state of the situation and there is no way to deviate from nature, then there is no event, and hence no possibility for truth. Networks appear, then, as the consummate “being without event.”

    What could be worse? If networks are designed to accommodate massive levels of contingency — as with the famous Robustness Principle — then they are also exceptionally adept at warding off “uncontrollable” change wherever it might arise. If everything is a network, then there’s no escape, there’s no possibility for the event.

    Jameson writes as much in The Seeds of Time when he says that it is easier to imagine the end of the earth and the end of nature than it is to imagine the ends of capitalism. Network pessimism, in other words, is really a kind of network defeatism in that it makes networks the alpha and omega of our world. It’s easier to imagine the end of that world than it is to discard the network metaphor and imagine a kind of non-world in which networks are no longer dominant.

    In sum, we shouldn’t give in to network pessimism. We shouldn’t subscribe to the strong claim that everything is a network. (Nor should we subscribe to the softer claim, that networks are merely the most common, popular, or natural architecture for today’s world.) Further, we shouldn’t think that networks are the best response to networks. Instead we must ask the hard questions. What is the political fate of networks? Did heterogeneity and systematicity survive the Twentieth Century? If so, at what cost? What would a non-net look like? And does thinking have a future without the network as guide?

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay
    _____

    Notes

    [1] Anna Conlan, “Representing Possibility: Mourning, Memorial, and Queer Museology,” in Gender, Sexuality and Museums, ed. Amy K. Levin (London: Routledge, 2010), 253-263.

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a sense of management, of the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of post-digital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this it is useful to explore the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role”, that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 8 and the, since renamed, Metro interface – through a notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, but also to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought, in other words, a primitivist predisposition which draws from both a neo-Euclidean model of geons (for Apple), but also a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution which has not only transformed the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat-design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.


    Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.

    Google, similarly, has reorganized its computational imaginary around a flattened layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion of material that “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic which has constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality inasmuch as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple, is crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material, that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think then of Apple as painting on layers of glass, and Google as thin paper objects (material) placed upon background paper. However, a key difference lies in the use of light and shadow in Google’s notion, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
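The laminate structure shared by both platforms can be reduced to a toy model (my own sketch; the sheet names and z values are hypothetical, not drawn from either company’s guidelines). Each interface element occupies one position on the z-axis, and the scene is composed back to front so that nearer sheets paint over farther ones:

```python
def composite(sheets):
    """Return sheet names in drawing order: farthest (lowest z) first,
    so later, nearer sheets overwrite what lies beneath them."""
    ordered = sorted(sheets, key=lambda s: s["z"])
    return [s["name"] for s in ordered]

# Hypothetical laminate: background paper, a card upon it, a cursor above.
layers = [
    {"name": "cursor", "z": 3},
    {"name": "background paper", "z": 0},
    {"name": "card", "z": 1},
]
print(composite(layers))  # ['background paper', 'card', 'cursor']
```

Whether the sheets are rendered as Apple’s translucent glass or as Google’s shadow-casting paper, meaning is carried by exactly this z-ordering of the laminate.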

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is that the materiality and specificity of the device should be responsive to the interface and context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential “Notes on Sculpture,” used “simple, regular and irregular polyhedrons.” Influenced by theories in psychology and phenomenology, he argued that these “established in the mind of the beholder ‘strong gestalt sensation’”, whereby form and shape could be grasped intuitively (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, with data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. They are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations emerging in response to the challenge raised by the “flattened” design paradigms. That is, both represent, if only in potentia, a power relation, and through this an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an ordering of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design but, more broadly, for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this is to think of these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic that both re-presents the computational and teaches consumers and users how to use and manipulate new, complex computational systems and stacks. In other words, in a striking visual technique, flat design communicates the vertical structure of the computational stack on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of computational interface interaction from beginning to end. Interface techniques are abstracted away from the particular device, for example through Apple’s “Handoff” continuity framework, which potentially changes reading and writing practices in interesting ways and opens new use-cases for wearables and nearables.

    These new interface paradigms, introduced by the flat turn, open very interesting possibilities for interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think, for example, of dashboards, personal information systems, the quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns, but also a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design outlined here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualizations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities, and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions about the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film. Indeed, the very idea of “publication”, and the material carrier for the notion of publication, is informed by this materiality, even if only as a notional affordance given by the conceptualization. It would be interesting to see, for example, how the book is remediated through the design philosophies that inform each company.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There are also some interesting links to be explored with Superflat, the postmodern art movement founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of its aesthetic and in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and in the Virtual Reality and Augmented Reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography