Author: boundary2

  • Arif Dirlik – The Rise of China and the End of the World As We Know It

    On February 27, 2016, longstanding boundary 2 board member Arif Dirlik gave his final lecture at the University of British Columbia. The talk, The Rise of China and the End of the World As We Know It, is available in full on the UBC Library’s website.

  • Martin Woessner — The Sociologists and the Squirrel — Review of “Georg Simmel and the Disciplinary Imaginary”

    by Martin Woessner

    Review of Elizabeth S. Goodstein, Georg Simmel and the Disciplinary Imaginary (Palo Alto: Stanford UP, 2017).

    Georg Simmel only began to be recognized as one of the founding figures of modern sociology shortly before his death in 1918.  The recognition came too late and generally amounted to the backhanded compliment in which scholars specialize: Simmel was brilliant, but. As an academic discipline in continental Europe and North America, sociology was still in the process of finding its methodological and institutional footing at the time.  It had neither the heritage nor the prestige of philosophy, but modernity was on its side.  It was the discipline of the future.  Sociologists were rigorous, scientific, and systematic—everything that Simmel supposedly was not.  Especially in comparison to Durkheim and Weber, Simmel’s work seemed dilettantish, more subjective and speculative than objective or empirical; more like poetry, in other words, than sociology.  It was a strange complaint to make of somebody who wrote a tome like The Philosophy of Money, which was hundreds of pages long and chock full of concrete examples. But it stuck.

    In the early decades of the twentieth century, as sociology became ever more scientific, Simmel’s fame became that of the negative example.  Neither his methodological preoccupations, which were wide-ranging, nor his intellectual style, which shunned footnotes and bibliographies, fit within the narrowing confines of academic sociology. He thus had to be written into and out of the discipline simultaneously.  In a 1936 survey of social thought across the Rhine, Raymond Aron conceded that “the development of sociology as an autonomous discipline can, in fact, scarcely be explained without taking his work into account,” but then proceeded to dispatch Simmel in just a few short pages, as if he were some kind of embarrassing distant relative who had to be acknowledged, but not necessarily celebrated.[1]  Another, perhaps more poetic but no less dismissive portrait came from Jose Ortega y Gasset, who likened Simmel to a “philosophical squirrel,” more content to leap from branch to branch, indeed from tree to tree, than to harvest the insights of any one particular area of inquiry.[2]

    Simmel may have ended up a squirrel by necessity rather than by choice.  Unable to secure a fully funded academic post until very late in his career, and then only in out-of-the-way Strasbourg—rather than, say, Heidelberg, where, with the help of Weber, he had hoped to obtain an appointment, or Berlin, where he lived and studied and taught as an unsalaried lecturer for most of his life—Simmel never enjoyed the academic security that might have lent itself to less squirrelish, more scientific pursuits.  His Berlin lectures were fabled performances—attended by everyone from Rainer Maria Rilke to George Santayana, who praised them to his Harvard colleague William James—but he nevertheless “remained,” as Elizabeth Goodstein argues in her new book, Georg Simmel and the Disciplinary Imaginary, “at the margins of the academic establishment.”[3]

    Goodstein revisits Simmel’s marginality because she thinks it is the key to understanding not just his career, which was simultaneously storied and tenuous, but also his curious absence from academic debates today.  Something essential about Simmel has been lost, she argues, in the narrative that transformed Simmel into a sociological ancestor, in the “decoupling” of his more sociological work from its philosophical foundations.[4]  Indeed, as David Frisby pointed out some time ago, Simmel never really thought of himself as a sociologist anyhow.[5]  There was a reason he didn’t call it The Sociology of Money.  As early as 1899, Simmel confessed in a letter to a French colleague that “it is altogether rather painful for me that abroad I am only known as a sociologist—whereas I am a philosopher, see my life’s vocation in philosophy, and only pursue sociology as a sideline.”[6]

    Heeding this remark, Goodstein urges us to see Simmel more as he saw himself: a marginalized figure, caught between ascendant “social science” on the one hand and “a kind of philosophy that was passing away” on the other.[7]   If we do so, we might begin to appreciate how very relevant Simmel’s work is to contemporary debates not just in sociology, but also across the humanities and social sciences more generally. In the vicissitudes of Simmel’s career and legacy, in other words, Goodstein sees a parable or two for the current intellectual epoch, in which academic disciplines seem to be in the process of reforming themselves along new and sometimes competing lines of inquiry.

    Instead of presenting us with Simmel as squirrel, then, Goodstein offers us a portrait of Simmel as conflicted interdisciplinarian.  It is reassuring, I suppose, to think that what our academic colleagues dismiss as our most evident weaknesses might one day be viewed as our greatest strengths, that what seems scatterbrained now may be heralded as innovative in the future.  For those of us who work in the amorphous field of interdisciplinary studies, Goodstein’s book might serve as both legitimation and justification—a defense, perhaps, of our squirreliness to our colleagues over in the harder sciences.  Still, it is difficult to shake the idea that interdisciplinarity is, like disciplinarity was a century ago, just another fad, another way to demonstrate to society that what we academics do behind closed doors is valuable and worthy of recognition, if not also funding.

    As Louis Menand and others have argued, talk of interdisciplinarity is, at root, an expression of anxiety.[8]  In the academy today there is certainly plenty to be anxious about, but, like Menand, I’m not sure that the discourse of interdisciplinarity adequately addresses any of it.  Interdisciplinarity does not address budget crises, crumbling infrastructure, or the increasingly contingent nature of academic labor.  In fact, it may even exacerbate these problems, insofar as it questions the rationale for having distinct disciplinary departments in the first place: why not collapse two or three different programs in the humanities into one, cut half their staff, and run a leaner, cheaper interdisciplinary program instead?  If we are all doing “theory” anyway, what difference does it make if we are attached to a literature department, a philosophy department, or a sociology department?

    That sounds paranoid, I know.  Interdisciplinarity is not an evil conspiracy concocted by greedy administrators.  It is simply the academic buzzword of our times.  But like all buzzwords, it says a lot without saying anything of substance, really.  It repackages what we already do and sells it back to us.  Like any fashion or fad, it is unique enough to seem innovative, but not so unique as to be truly independent.  Well over a century ago Simmel suggested that fashion trends were reflections of our competing desires for both “imitation” and “differentiation.”[9]  Interdisciplinarity’s fashionable status in the contemporary academy suggests that these desires have found a home in higher education.  In an effort to differentiate ourselves from our colleagues, we try to imitate the innovators.  We buy into the trend.  Interdisciplinary programs, built around interdisciplinary pedagogy, now produce and promote interdisciplinary research and scholarship, the end results of which are interdisciplinary curricula, conferences, journals, and textbooks.  All of them come at a price.  None of them, it seems to me, are worth it.

    When viewed from this perspective at least, Goodstein’s book isn’t about Simmel at all.  It is about what has been done to Simmel by the changing tides of academic fashion.  The reception of his work becomes, in Goodstein’s hands, a cautionary tale about the plight of disciplinary thinking in the twentieth and twenty-first centuries.  The first section of the book, which investigates the way in which Simmel became a “(mostly) forgotten founding father” of modern sociology, shows how “Simmel’s oeuvre came to be understood as simultaneously foundational for and marginal to the modern social sciences.”[10]  Insofar as he made social types (including “the stranger” and “the adventurer”) and forms of social interaction (such as “exchange” and “conflict,” but also including “sociability” itself) topics worthy of academic scrutiny, Simmel proved indispensable; insofar as he did so in an impressionistic as opposed to empirical or quantitative style, he was expendable.  He was both imitated and ignored.  Simmel helped make the discipline of sociology possible, but he would remain forever a stranger to it—“a philosophical Monet,” as his student György Lukács described him, surrounded by conventional realists.[11]

    Goodstein uses the Simmel case to warn against the dangers of what now gets called, in those overpriced textbooks, “disciplinary reductionism.”  She doesn’t use that term, but she is not immune to similar sounding jargon, which is part and parcel of interdisciplinary branding.  “In exploring the history of Simmel’s representation as (proto)sociologist,” she writes, “I render more visible the highly tendentious background narratives on which the plausibility of that metadisciplinary (imagined, lived) order as a whole depends—and call into question the (largely tacit) equation of the differentiation and specialization knowledge practices with intellectual progress.”[12]  An explanatory footnote tacked on to this sentence doesn’t clarify things all that much: “My purpose is not to argue against the value disciplines or to discount the modes of knowing they embody and perpetuate, but to emphasize that meta-, inter-, pre-, trans-, and even anti-disciplinary approaches are not just supplements or correctives to disciplinary knowledge practices but are themselves valuable constitutive features of a vibrant intellectual culture.”[13]  Sounds squirrely to me, and not necessarily in a good way.

    If Simmel’s reception in academic sociology serves as a cautionary tale about the limits of disciplinary knowledge for Goodstein, his writings represent something else entirely: a light of inspiration at the end of the disciplinary tunnel.  They offer “an alternative vision of inquiry into human cultural or social life as a whole,” one that rejects the narrow tunnel-vision of specialized, compartmentalized, disciplinary frameworks.[14]  It is a vision that might also help us to think critically about interdisciplinarity as well, for as Goodstein points out later in the book, in a more critical voice, “the contemporary turn to interdisciplinarity remains situated in a discursive space shaped and reinforced by disciplinary divisions.”[15]

    The middle section of Goodstein’s book is devoted to a close reading of The Philosophy of Money.  Its three chapters argue, each from a slightly different angle, that Simmel’s magnum opus substantiates just such an “alternative vision.”  Here Simmel is presented not as the academic that sociologists came to portray, but as what he so desperately wanted to be: a philosopher.  Goodstein argues that Simmel should be understood as a “modernist philosopher,” a kind of missing link, as it were, between Nietzsche on the one side and Husserl and Heidegger on the other.  Simmel takes from Nietzsche the importance of post-Cartesian perspectivism, and, in applying it to social and cultural life, anticipates not just the phenomenology of Husserl and the existential philosophy of Heidegger, but also the critical theory of Lukács, and, later, the Frankfurt School.  This is the theory you have been waiting for, the one that brings it all together.

    In Goodstein’s view, The Philosophy of Money attempts nothing less than an inquiry into all social and cultural life through the subject of money relations. As such, it is neither “inter- or transdisciplinary.”  “It is,” she writes, “metadisciplinary.”[16]  It operates at a level all its own.  It uses the phenomena associated with money—abstraction, valuation, and signification, for example—to explore larger questions associated with epistemology, ethics, and even metaphysics more generally.  It shuttles back and forth between the most concrete and immediate observations and the most far-reaching speculations.  It helps us understand how calculation, objectivity, and relativity, for example, become the defining features of modernity.  It shows us how seemingly objective social and cultural forms—from artistic styles to legal and political norms—emerge out of intimate, subjective experience.  But it also shows how these forms come to reify the forms of life out of which they initially sprang.[17]

    In Simmel’s hands, money becomes a synecdoche—the “synecdoche of synecdoche” Goodstein repeats, one too many times—for social and cultural life as a whole.[18]  What Hegel’s Phenomenology of Spirit did for history, The Philosophy of Money does for cold, hard cash.  In this regard, at least, Goodstein’s efforts to re-categorize Simmel as a “modernist philosopher”—to put the philosophy back into the book, as it were—are insightful.  Still, as I read Georg Simmel and the Disciplinary Imaginary, I couldn’t help but wonder if it might not be more valuable these days to put some of the money back into it instead.  Given all the ways in which interdisciplinarity has been sold to us, and given the neoliberal reforms that are sweeping through the academy, now might be the time to focus on money as money, and not merely as synecdoche.

    The problems we face today, both within and beyond the academy, are tremendous.  We live in an age, as Goodstein puts it, of “accelerating ecological, economic, and sociopolitical crises.”[19]  No matter what its promotional materials suggest, interdisciplinarity will not rescue us from any of them.  Goodstein eventually admits as much: “the proliferation of increasingly differentiated inter-, trans-, and post-disciplinary practices reinforces rather than challenges the philosophical—ethical, but also metaphysical—insufficiencies of the modern disciplinary imaginary.”[20]  In the final section of her book she emphasizes not so much the disciplining of Simmel’s work by those narrow-minded sociologists as the liberating theoretical potential of his “practices of thought,” which “even today do not comfortably fit into existing institutional frameworks.”[21]  After depicting Simmel as a victim of academic rationalization, Goodstein now presents him as a potential savior—a way out of the mess of disciplinarity altogether.

    Attractive as that sounds, I’m not sure that Simmel’s “modernist philosophy” will rescue us, either.  In fact, I’m not sure that any philosophical or theoretical framework will, by itself, give us what we need to confront the challenges we face.  Worrying about finding the right intellectual perspective may not be as important as worrying about where, in our society, the money comes from and where—and to whom—it goes at the end of the day.  We need some advocacy to go along with our philosophy, and fretting over the merits of inter-, trans-, post-, meta-, anti-disciplinarity may just get in the way of it.

    Simmel predicted that he would “die without spiritual heirs,” which was, in his opinion, “a good thing.”  In a revealing quotation that serves as the guiding leitmotif of Goodstein’s book, he likened his intellectual legacy to “cold cash divided among many heirs, and each converts his portion into an enterprise of some sort that corresponds to his nature; whose provenance in that inheritance is not visible.”[22]  Georg Simmel and the Disciplinary Imaginary goes a long way towards reestablishing that provenance.  Maybe it’s about time we start calling for an inheritance tax to be imposed upon the current practitioners and proponents of interdisciplinarity, who have turned that cold cash into gold.

    Martin Woessner is Associate Professor of History & Society at The City College of New York’s Center for Worker Education.  He is the author of Heidegger in America (Cambridge UP, 2011).

    Notes

    [1] Raymond Aron, German Sociology, trans. Mary and Thomas Bottomore (New York: Free Press of Glencoe, 1964), 5 n.1.  Aron’s text was first published in French in 1936.

    [2] Quoted in Lewis Coser, Masters of Sociological Thought: Ideas in Historical and Social Context, Second Edition (Long Grove, Illinois: Waveland Press, 1997), 199.

    [3] Goodstein, Georg Simmel, 15.

    [4] Ibid., 112.

    [5] David Frisby, Fragments of Modernity: Theories of Modernity in the Work of Simmel, Kracauer and Benjamin (Cambridge, Massachusetts: MIT Press, 1986), 64.

    [6] Goodstein, Georg Simmel, 41.

    [7] Ibid., 29.

    [8] Louis Menand, The Marketplace of Ideas: Reform and Resistance in the American University (New York: Norton, 2010), 97.

    [9] Georg Simmel, “Fashion,” in On Individuality and Social Forms, edited and with an introduction by Donald N. Levine (Chicago: University of Chicago Press, 1971), 296.

    [10] Goodstein, Georg Simmel, 106.

    [11] “Introduction to the Translation,” in Simmel, The Philosophy of Money, trans. Tom Bottomore and David Frisby, from a first draft by Kaethe Mengelberg (London: Routledge & Kegan Paul, 1978), 29.

    [12] Goodstein, Georg Simmel, 33.

    [13] Ibid., note 43.

    [14] Ibid., 67.

    [15] Ibid., 131.

    [16] Ibid., 155.

    [17] This point is emphasized in Simmel’s final work, The View of Life: Four Metaphysical Essays with Journal Aphorisms, trans. John A.Y. Andrews and Donald N. Levine, with an introduction by Donald N. Levine and Daniel Silver, and an appendix, “Journal Aphorisms, with an Introduction” edited, translated, and with an introduction by John A.Y. Andrews (Chicago: University of Chicago Press, 2010), 351-352.

    [18] Goodstein, Georg Simmel, 171.

    [19] Ibid., 329.

    [20] Ibid., 258.

    [21] Ibid., 254.

    [22] Ibid., 1.

  • Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age

    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)––I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication implies inevitably the existence of a society which is, in his eyes, itself an unnatural institution. One can find all over Flusser’s work traces of his life-long attempt to think through the full philosophical implications of European nihilism, and evidence of this intellectual engagement can be readily found in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)––enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth-century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses––then cannot one be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line to Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to 2013 of the narrative’s present. The novel’s Silicon Valley setting, nominal and characterological doubling, and structural narrative coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen––or all Joshua Cohens––is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments.
In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual––or every individual, it does not really matter––is also nothing more or anything less than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological, reading subject, and it suggests too that literature––or even just language in a general sense––and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet stylistically ironic and incisive interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and to a lesser extent, subjectivity, appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitute some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues similarly that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories––or simply the records of our data use (and its subsequent use of us)––comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and threefold of its Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being––it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

Ontological thinking––which today means ontological accounting, or finding ways to account for the ontologically actuarial––is today’s philosophical equivalent of best practices for data management, except there is no difference between one’s data and one’s Self. Accordingly, any ontological difference that might have once stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent to the Self emerges out of what Jarzombek calls an “animated ontology.”

In this “animated ontology,” “subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms that merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

Digital Stockholm Syndrome asserts that what exists in today’s ontological systems––systems both philosophical and computational––is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).

    Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

One wonders if the incessant cultural and political reminders of the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. Why is it humanities scholars who are, in some cases, most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human”? And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which to locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/). As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “’Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are being put to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.

    Back to the essay

    _____

    Notes

    [1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Encapsulating Flusser’s spirit of healthy skepticism toward technical apparatuses, the situation that both Flusser and Hu fear is one in which “the technology has produced the means of its own interpretation” (xixx).

[2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “’New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, the speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not about what is or is not an object, but rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

[4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein-Smith’s recent essay “Scientizing the Humanities: Shifts, Collisions, Negotiations,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

    [5] In “Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

    [6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? [….] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

    [7] Wendy Hui Kyun Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the  full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension to confession, self-curation as self-exposition, and online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “auto-theory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010), Taipei (2013), Natasha Stagg’s Surveys, and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ certain social/political acts of transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered a theory of the self, or a self-theorization; perhaps even the idea that personal experience is itself a kind of theory applies here, too.
I wonder, though, how its meaning would change if the prefix “auto” were understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

    [10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-self whose authenticity is produced when it is fact-checked against a biographical database (3)—self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”–its representation in and as data–comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

[11] Flusser is again instructive on this point, albeit in his own idiosyncratic way. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission not only that the category of “the human” is unnatural, but that the study of such an unnatural thing is itself unnatural as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves as supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

[13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, which is a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” The Floating Sheep Collective raises in this section crucially important questions regarding ownership of data with regard to marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars–the Self’s data points help point the Self towards its own ontological location in and as data—then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

[14] “Mind/paranoia: they are the same word!” (Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
    • Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
    • Chun, Wendy Hui Kyun. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
    • Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
    • Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
    • Hui, Yuk. “Algorithmic Catastrophe—The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
    • Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
    • Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
    • Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014): n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.

    David Thomas – On No-Platforming

    by David Thomas

    No-platforming has recently emerged as a vital tactical response to the growing mainstream presence of the self-styled alt-right. Described by proponents as a form of cordon sanitaire, and vilified by opponents as the work of coddled ideologues, no-platforming entails the struggle to prevent political opponents from accessing institutional means of amplifying their views. The tactic has drawn criticism from across the political spectrum. Former US President Barack Obama was himself so disturbed by the phenomenon that during the closing days of his tenure he was moved to remark:

    I’ve heard some college campuses where they don’t want to have a guest speaker who is too conservative or they don’t want to read a book if it has language that is offensive to African-Americans or somehow sends a demeaning signal towards women. …I gotta tell you I don’t agree with that either. I don’t agree that you, when you become students at colleges, have to be coddled and protected from different points of view…Sometimes I realized maybe I’ve been too narrow-minded, maybe I didn’t take this into account, maybe I should see this person’s perspective. …That’s what college, in part, is all about…You shouldn’t silence them by saying, “You can’t come because I’m too sensitive to hear what you have to say” … That’s not the way we learn either. (qtd. Kingkade 2017 [2015])

    Obama’s words here nicely crystalize one traditional understanding of the social utility of free speech. In classical liberal thought, free speech is positioned as the cornerstone of a utilitarian account of political and technological progress, one that views the combat of intellectually dexterous elites as the crucible of social progress. The free expression of informed elite opinion is imagined as an indispensable catalyst to modernity’s ever-accelerating development of new knowledge. The clash of unfettered intellects is said to serve as the engine of history.

    For John Stuart Mill, one of the first to formulate this particular approach to the virtues of free expression, the collision of contrary views was necessary to establish any truth. Mill explicitly derived his concept of the truth-producing “free market of ideas” from Adam Smith’s understanding of how markets work. In both cases, moderns were counselled to entrust themselves to the discretion of a judicious social order, one that was said to emerge spontaneously as rational individuals exerted their vying bids for self-expression and self-actualization. These laissez faire arguments insisted that an optimal ordering of ends and means would ultimately be produced out of the mass of autonomous individual initiatives, one that would have been impossible to orchestrate from the vantage point of any one individual or group. In both cases – free speech and free markets – it was said that if we committed to the lawful exercise of individual freedoms we could be sure that the invisible hand will take care of the rest, sorting the wheat from the chaff, sifting and organizing initiatives according to the outcomes that best befit the social whole, securing our steady collective progress toward the best of all possible worlds. No surprise, then, that so much worried commentary on the rise of the alt-right has cautioned us to abide by the established rules, insisting that exposure to the free speech collider chamber will wear the “rough edges” off the worst ideas, allowing their latent kernels of rational truth to be developed and revealed, whilst permitting what is noxious and unsupportable to be displayed and refuted.

A key point, then, about no-platforming is that its practice cuts against the grain of this vision of history and against the theory of knowledge on which it is founded. For in contrast to proponents of Mill’s proceduralist epistemology, student practitioners of no-platforming have appropriated to themselves the power to directly intervene in the knowledge factories where they live and work, “affirmatively sabotaging” (Spivak 2014) the alt-right’s strategic attempts to build out its political legitimacy. And it is this use of direct action, and the site-specific rejection of Mill’s model of rational debate that it has entailed, that has brought student activists to the attention of university administrators, state leaders, and law enforcement.

We should not overlook the fact that these students have been made the object of ire precisely because of their performative unruliness, because of their lack of willingness to defer to the state’s authority to decide what constitutes acceptable speech. One thing often left unnoticed in celebrations of the freedoms afforded by liberal democracies is the role that the state plays in conditioning the specific kinds of autonomy that individuals are permitted to exercise. In other words, our autonomy to express opposition as we see fit is already much more intensively circumscribed than recent “free speech” advocates care to admit.

    Representations of no-platforming in the media bring us to the heart of the matter here. Time and again, in critical commentary on the practice, the figure of the wild mob resurfaces, often counter-posed to the disciplined, individuated dignity of the accomplished orator:

    [Person X] believes that he has an obligation to listen to the views of the students, to reflect upon them, and to either respond that he is persuaded or to articulate why he has a different view. Put another way, he believes that one respects students by engaging them in earnest dialogue. But many of the students believe that his responsibility is to hear their demands for an apology and to issue it. They see anything short of a confession of wrongdoing as unacceptable. In their view, one respects students by validating their subjective feelings. Notice that the student position allows no room for civil disagreement. Given this set of assumptions, perhaps it is no surprise that the students behave like bullies even as they see themselves as victims. (Friedersdorf 2015)

    These remarks are exemplary of a certain elective affinity for a particular model citizen – a purportedly non-bullying parliamentarian agent or eloquent spokesperson who is able to establish an argument’s legitimacy with calm rationality. These lofty incarnations of “rational discourse” are routinely positioned as the preferred road to legitimate political influence. Although some concessions are made to the idea of “peaceful protest,” in the present climate even minimal appeals to the politics of collective resistance find themselves under administrative review (RT 2017). Meanwhile, champions of free speech quietly endorse specific kinds of expression. Some tones of voice, some placard messages, some placements of words and bodies are celebrated; others are reviled. In practice, the promotion of ostensibly “free” speech often just serves to idealize and define the parameters of acceptable public conduct.

    No-platforming pushes back against these regulatory mechanisms. In keeping with longstanding tactics of subaltern struggle, its practice demonstrates that politics can be waged through a diversity of means, showing that alongside the individual and discursive propagation of one’s political views, communities can also act as collective agents, using their bodies and their capacity for self-organization to thwart the rise of political entities that threaten their wellbeing and survival. Those conversant with the history of workers’ movements will of course recognize the salience of such tactics. For they lie at the heart of emancipatory class politics, in the core realization that in standing together in defiance of state violence and centralized authority, disenfranchised communities can find ways to intervene in the unfolding of their fates, as they draw together in the unsanctioned shaping and shielding of their worlds.

    It is telling that so much media reportage seems unable to identify with this history, greeting the renewed rise of collective student resistance with a combination of bafflement and recoil. The undercurrent of pearl-clutching disquiet that runs through such commentary might also be said to perform a subtle kind of rhetorical work, perhaps even priming readers to anticipate and accept the moment when police violence will be deployed to restore “order,” to break up the “mob,” and force individuals back onto the tracks that the state has ordained.

Yet this is not to say there is nothing new about this new wave of free speech struggles. Instead, they supply further evidence that longstanding strategies of collective resistance are being displaced out of the factory systems – where we still tend to look for them – and into what Joshua Clover refers to, following Marx, as the sphere of circulation, into the marketplaces and the public squares where commodities and opinions circulate in search of valorization and validation. Disenfranchised communities are adjusting to the debilitating political legacies of deindustrialization. As waves of automation have rendered workers unable to express their resistance through the slowdown or sabotage of the means of production, the obstinacy of the strike has been stripped down to its core. And as collective resistance to the centralized administration of social conduct now plays out beyond the factory’s walls, it increasingly takes on the character of public confrontation with the state. Iterations of this phenomenon play out in flashpoints as remote and diverse as Berkeley, Ferguson, and Standing Rock. And as new confrontations follow harder on the heels of the old, they make a spectacle of the deteriorating condition of the social contract.

    If it seems odd to compare the actions of students at elite US universities and workers in the industrial factory systems of old, consider the extent to which students have themselves become increasingly subject to proletarianization and precarity – to indebtedness, to credit wages, and to job prospects that are at best uncertain. This transformation of the university system – from bastion of civil society and inculcator of elite modes of conduct, to frenetic producer of indebted precarious workers – helps to account for the apparent inversion of campus radicalism’s orientation to the institution of free speech.

    Longtime observers will recall that the same West Coast campuses that have been key flashpoints in this wave of free speech controversies were once among the most ardent champions of the institution. Strange, then, that in today’s context the heirs to Mario Savio’s calls to anti-racist civil disobedience seem more prone to obstruct than to promote free speech events. Asked about Savio’s likely response to this trend, social scientist and biographer Robert Cohen finds that “Savio would almost certainly have disagreed with the faculty and students who urged the administration to ban Milo Yiannopoulos from speaking on campus, and been heartened by the chancellor’s refusal to ban a speaker” (Cohen 2017). The alt-right has delighted in trolling student radicals over this apparent break with tradition:

    Milo Inc.’s first event will be a return to the town that erupted in riots when he was invited to speak earlier this year. In fact, Yiannopoulos said that he is planning a “week-long celebration of free speech” near U.C. Berkeley, where a speech by his fellow campus agitator, Ann Coulter, was recently canceled after threats of violence. It will culminate in his bestowing something called the Mario Savio Award for Free Speech. (The son of Savio, one of the leaders of Berkeley’s Free Speech movement during the mid-1960s, called the award “some kind of sick joke”.) (Nguyen 2017)

Yet had Milo named his free speech prize after Savio’s would-be mentor John Searle, then the logic of current events might have appeared a little more legible. For as Lisa Hofmann-Kuroda and Beezer de Martelly have recently reminded us, in the period between 1965 and 1967 when the Free Speech Movement (FSM) was emerging as the home of more militant forms of student resistance, the US government commissioned Searle to research the movement. The resulting publication would eventually come to serve “as a manual for university administrators on how to most efficiently dismantle radical student protests” (Hofmann-Kuroda and de Martelly 2017). One of the keys to Searle’s method was that it “encouraged students to focus on their own … abstract rights to free speech,” a move that was to “shift campus momentum away from Black labor struggles and toward forming a coalition between conservatives and liberals on the shared topic of free speech rights” (Hofmann-Kuroda and de Martelly 2017). Summing up the legacies of this history from today’s vantage, Hofmann-Kuroda and de Martelly remark:

    In hindsight, it becomes clear that the “alt-right”‘s current use of the free speech framework as a cover for the spread of genocidal politics is actually a logical extension of the FSM — not, as some leftists would have it, a co-optation of its originally “radical” intentions. In addition to the increasingly violent “free speech rallies” organized in what “alt-right” members have dubbed “The Battle for Berkeley,” the use of free speech as a legitimating platform for white supremacist politics has begun to spread throughout the country. (Hofmann-Kuroda and de Martelly 2017)

It is in relation to this institutional history that we might best interpret the alt-right’s use of free speech and the responses of the student left. For as Hofmann-Kuroda and de Martelly suggest, the alt-right’s key avatars such as Milo and Richard Spencer have now succeeded in building out the reach of Searle’s tactics. Their ambitions have extended beyond defusing social antagonisms and shoring up the prevailing status quo; indeed, in an eerie echo of Savio’s hopes for free speech, the alt-right now sees the institution as a site where dramatic social transformations can be triggered.

    But why then is the alt-right apt to see opportunities in this foundational liberal democratic institution, while the student left is proving more prone to sabotage its smooth functioning? It certainly appears that Searle’s efforts to decouple free speech discourse and anti-racist struggle have been successful. Yet to grasp the overall stakes of these struggles it can be helpful to pull back from the abstract debates that Searle proved so adept in promoting, to make a broader assessment of prevailing socio-economic and climatic conditions.

For in mapping how the terrain has changed since the time of Savio and Searle we might take account of the extent to which the universal summons to upward mobility, and the global promise of endless material and technological enfranchisement that defined the social experience of postwar modernization, have lately begun to ring rather hollow. Indeed, as we close in on the third decade of the new millennium, there seems to be no end to the world system’s economic woes in sight, and no beginning to its substantive reckoning with the problem of anthropogenic climate change.

    In response, people are changing the way they orient themselves toward the centrist state. In another instance of his welcome and ongoing leftward drift, Bruno Latour argues that global politics are now defined by the blowback of a catastrophically failed modernization project:

    The thing we share with these migrating peoples is that we are all deprived of land. We, the old Europeans, are deprived because there is no planet for globalization and we must now change the entire way we live; they, the future Europeans, are deprived because they have had to leave their old, devastated lands and will need to learn to change the entire way they live.

    This is the new universe. The only alternative is to pretend that nothing has changed, to withdraw behind a wall, and to continue to promote, with eyes wide open, the dream of the “American way of life,” all the while knowing that billions of human beings will never benefit from it. (Latour 2017)

Apprehending the full ramifications of the failure of modernization will require us to undertake what the Club of Rome once referred to as a “Copernican revolution of the mind” (Club of Rome 1972: 196). And in many respects the alt-right has been quicker to begin this revolution than the technocratic guardians of the globalist order. In fact, it seems evident that the ethnonationalists look onto the same prospects as Latour, while prescribing precisely the opposite remedies. Meantime, guardians of the “center” remain all too content to repeat platitudinous echoes of Mill’s proceduralism, assuring us all that – evidence to the contrary – the market has the situation in its invisible hand.

This larger historical frame is key to understanding campus radicalism’s turn to no-platforming, which seems to register – on the level of praxis – that the far right has capitalized far more rapidly on emergent conditions than the center or the left. In understanding why this has occurred, it is worth considering the relationship between the goals of the FSM and the socioeconomic conditions that prevailed in the late 1960s and early 1970s when the movement was at its peak.

For Savio and his anti-racist allies in the FSM, free speech afforded radicals both a platform from which to protest US imperialism with relative impunity, and an institutional lodestar by which to steer a course that veered away from the purges and paranoia of the Stalinist culture of command. It seemed that the institution itself served as a harbinger of a radicalized and “socialized” state, one that was capable of executing modernization initiatives that would benefit everyone.

    The postwar program of universal uplift then seemed apt to roll out over the entire planet, transforming the earth’s surface into a patchwork of independent modern nation states all locked into the same experience of ongoing social and technological enfranchisement. In such a context Savio and other contemporary advocates of free-speech saw the institution as a foreshadowing of the modern civil society into which all would eventually be welcomed as enfranchised bearers of rights. Student activism’s commitment to free speech thus typified the kind of statist radicalism that prevailed in the age of decolonization, a historical period when the postcolonial state seemed poised to socialize wealth, and when the prospect of postcolonial self-determination was apt to be all but synonymous with national modernization programs.

Yet in contrast to this expansive and incorporative modernizing ethos, the alt-right savior state is instead being modeled around avowedly expulsive and exclusionary initiatives. This is the state reimagined as a gated community writ large, one braced – with its walls, border camps, and guards – to resist the incursion of “alien” others, all fleeing the catastrophic effects of a failed postwar modernization project. While siphoning off natural wealth to the benefit of the enwalled few, this project has unleashed the ravages of climate change and the impassive violence of the border on the exposed many. The alt-right response to this situation is surprisingly consonant with the Pentagon’s current assessment, wherein the US military is marketed as a SWAT team serving at the disposal of an urban super elite:

    https://vimeo.com/187475823

Given the lines along which military and official state policy now trend, it is probably a mistake to characterize far-right policy proposals as a wholesale departure from prevailing norms. Indeed, it seems quite evident that – as Latour remarks – the “enlightened elite” have known for some time that the advent of climate change has given the lie to the longstanding promises of the postwar reconstruction:

    The enlightened elites soon started to pile up evidence suggesting that this state of affairs wasn’t going to last. But even once elites understood that the warning was accurate, they did not deduce from this undeniable truth that they would have to pay dearly.

    Instead they drew two conclusions, both of which have now led to the election of a lord of misrule to the White House: Yes, this catastrophe needs to be paid for at a high price, but it’s the others who will pay, not us; we will continue to deny this undeniable truth. (Latour 2017)

    From such vantages it can be hard to determine to what extent centrist policies actually diverge from those of the alt-right. For while they doggedly police the exercise of free expression, representatives of centrist orthodoxy often seem markedly less concerned with securing vulnerable peoples against exposure to the worst effects of climate change and de-development. In fact, it seems all too evident that the centrist establishment will more readily defend people’s right to describe the catastrophe in language of their own choosing than work to provide them with viable escape routes and life lines.

Contemporary free speech struggles are ultimately conflicts over policy rather than ironic contests over theories of truth. For it has been in the guise of free speech advocacy that the alt-right has made the bulk of its initial gains, promoting its genocidal vision through the disguise of ironic positional play, a “do it for the lolz” mode of summons that marshals the troops with a nod and wink. It seems that in extending the logic of Searle’s work at Berkeley, the alt-right has thus managed to “hack” the institution of free speech, navigating it with such a deft touch that defenses of the institution are becoming increasingly synonymous with the mainstream legitimation of their political project.

Is it then so surprising that factions of the radical left are returning full circle to the foundationally anti-statist modes of collective resistance that defined radical politics at its inception? Here, Walter Benjamin’s concept of “the emergency brake” suggests itself, though we can adjust the metaphor a little to better grasp current conditions (Benjamin 2003: 401). For it is almost as if the student left has responded to a sense that the wheel of history had taken a sickening lurch rightward, by shaking free of paralysis, by grabbing hold of the spokes and pushing back, greeting the overawing complexities of our geopolitical moment with local acts of defiance. It is in this defiant spirit that we might approach the free speech debates, arguing not for the implementation of draconian censorship mechanisms (if there must be a state, better that it is at least nominally committed to freedom of expression than not) but against docile submission to a violent social order—an order with which adherence to the doctrine of free speech is perfectly compatible. The central lesson that we might thus draw from the activities of Berkeley’s unruly students is that the time for compliant faith in the wisdom of our “guardians” is behind us (Stengers 2015: 30).

    David Thomas is a Joseph-Armand Bombardier Canada Graduate Scholar in the Department of English at Carleton University. His thesis explores narrative culture in post-workerist Britain, and unfolds around the twin foci of class and climate change.

    Bibliography

    Benjamin, Walter. 2003. Selected Writings Volume 4: 1938 – 1940. Cambridge: Harvard University Press.

    Clover, Joshua. 2016. Riot. Strike. Riot. London: Verso.

    Cohen, Robert. 2017. “What Might Mario Savio Have Said About the Milo Protest at Berkeley?” Nation, February 7. www.thenation.com/article/what-might-mario-savio-have-said-about-the-milo-protest-at-berkeley/

    Friedersdorf, Conor. 2015. “The New Intolerance of Student Activism.” Atlantic, November 9. www.theatlantic.com/politics/archive/2015/11/the-new-intolerance-of-student-activism-at-yale/414810/

    Hofmann-Kuroda, Lisa, and Beezer de Martelly. 2017. “The Home of Free Speech™: A Critical Perspective on UC Berkeley’s Coalition With the Far-Right.” Truth Out, May 17. www.truth-out.org/news/item/40608-the-home-of-free-speech-a-critical-perspective-on-uc-berkeley-s-coalition-with-the-far-right

Kingkade, Tyler. 2015. “Obama Thinks Students Should Stop Stifling Debate On Campus.” Huffington Post, September 9. [Updated February 2, 2017]: www.huffingtonpost.com/entry/obama-college-political-correctness_us_55f8431ee4b00e2cd5e80198

Latour, Bruno. 2017. “The New Climate.” Harper’s, May. harpers.org/archive/2017/05/the-new-climate/

Nguyen, Tina. 2017. “Milo Yiannopoulos Is Starting a New, Ugly, For-Profit Troll Circus.” Vanity Fair, April 28. www.vanityfair.com/news/2017/04/milo-yiannopoulos-new-media-venture

“Right to Protest?: GOP State Lawmakers Push Back Against Public Dissent.” 2017. RT, February 4. www.rt.com/usa/376268-republicans-seek-outlaw-protest/

    Spivak, Gayatri. 2014. “Herald Exclusive: In conversation with Gayatri Spivak,” by Nazish Brohiup. Dawn, Dec 23. www.dawn.com/news/1152482

Stengers, Isabelle. 2015. In Catastrophic Times: Resisting the Coming Barbarism. Open Humanities Press. openhumanitiespress.org/books/download/Stengers_2015_In-Catastrophic-Times.pdf

  • Arne De Boever — Realist Horror — Review of “Dead Pledges: Debt, Crisis, and Twenty-First-Century Culture”

    Arne De Boever — Realist Horror — Review of “Dead Pledges: Debt, Crisis, and Twenty-First-Century Culture”

    by Arne De Boever

    Review of Annie McClanahan, Dead Pledges: Debt, Crisis, and Twenty-First-Century Culture (Stanford: Stanford University Press, 2017)

    This essay has been peer-reviewed by the boundary 2 editorial collective.

    The Financial Turn

    The financial crisis of 2007-8 has led to a veritable boom of finance novels, that subgenre of the novel that deals with “the economy”.[i] I am thinking of novels such as Jess Walter’s The Financial Lives of the Poets (2009), Jonathan Dee’s The Privileges (2010), Adam Haslett’s Union Atlantic (2010), Teddy Wayne’s Kapitoil (2010), Cristina Alger’s The Darlings (2012), John Lanchester’s Capital (2012), David Foster Wallace’s The Pale King (2012),[ii] Mohsin Hamid’s How To Get Filthy Rich in Rising Asia (2013), Nathaniel Rich’s Odds Against Tomorrow (2013), Meg Wolitzer’s The Interestings (2013)—and those are only a few.

Literary criticism has followed suit. Annie McClanahan’s Dead Pledges: Debt, Crisis, and Twenty-First-Century Culture (published in the post-45 series edited by Kate Marshall and Loren Glass) studies some of those novels. It follows on the heels of Leigh Claire La Berge’s Scandals and Abstraction: Financial Fiction of the Long 1980s (2015) and Anna Kornbluh’s Realizing Capital: Financial and Psychic Economies in Victorian Form (2014), both of which deal with earlier instances of financial fiction. By 2014, McClanahan had already edited (with Hamilton Carroll) a “Fictions of Speculation” special issue of the Journal of American Studies. At the time of my writing, Alison Shonkwiler’s The Financial Imaginary: Economic Mystification and the Limits of Realist Fiction has just appeared, and no doubt many more will follow. In the Coda to her book, La Berge mentions that scholars are beginning to talk about the “critical studies of finance” to bring together these developments into a thriving field.

Importantly, Dead Pledges looks not only at novels but also at poetry, conceptual art, photography, and film. Indeed, the “financial turn” involves more than fiction: J.C. Chandor’s Margin Call (2011), Costa-Gavras’ Capital (2012), Martin Scorsese’s The Wolf of Wall Street (2013), and Adam McKay’s The Big Short (2015) were all released in the aftermath of the 2007-8 crisis. American Psycho, the musical, premiered in London in 2013 and moved on to New York in 2016.

All of this contemporary work builds on and explicitly references earlier instances of thinking and writing about the economy, so it is not as if this interest in the economy is anything new. However, given the genre’s very name, one could argue that while the finance novel—understood more broadly as any novel about the economy—precedes the present, it is only during the financial era, which began in the early 1970s, and especially since the financial crisis of 2007-8, that it has truly come into its own. For the specific challenge that is now set before the finance novel is precisely to render the historic formation of “finance” into fiction. Critics have noted that such a rendering cannot be taken for granted. While capitalism has traditionally been associated with the realist novel (as La Berge and Shonkwiler point out at the outset of their edited collection Reading Capitalist Realism[iii]), literary scholars consider that capitalism’s intensification into financial or finance capitalism or finance tout court also intensifies the challenge to realism that some had already associated with global capitalism.[iv] Abstract and complex, finance exceeds what Julia Breitbach has observed to be some of the key characteristics of realism: “narration”, associated with “readable plots and recognizable characters”; “communication”, allowing “the reader to create meaning and closure”; “reference”, or “language that can refer to external realities, that is, to ‘the world out there’”; and “ethics”, “a return to commitment and empathy”.[v]

    In the late 1980s, and just before the October 19th, 1987 “Black Monday” stock market crash, Tom Wolfe may still have thought that to represent finance, one merely had to flex one’s epistemological muscle: all novelists had to do, Wolfe wrote, is report—to bring “the billion-footed beast of reality” to terms.[vi] However, by the time Bret Easton Ellis’s American Psycho comes around, that novel presents itself as an explicit response to Wolfe,[vii] proposing a financial surrealism or what could perhaps be called a “psychotic realism” (Antonio Scurati) to capture the lives that finance produces. If (as per a famous analysis) late capitalism’s aesthetic was not so much realist but postmodernist, late late capitalism or just-in-time capitalism has only intensified those developments, leading some to propose post-postmodernism as the next phase in this contemporary history.[viii]

    At the same time, realism seems to have largely survived the postmodernist and post-postmodernist onslaughts: in fact, it too has been experiencing a revival,[ix] and one that is visible in, and in some cases dramatized in, the contemporary finance novel (which thereby exceeds the kind of financial realism that Wolfe proposes). Indeed, one reason for this revival could be that in the aftermath of the financial crisis, novelists have precisely sought to render abstract and complex finance legible, and comprehensible, through literature—to bring a realism to the abstract and complex world of finance.

    Given realism’s close association with capitalism, and its post- and post-postmodern crisis under late capitalism and finance, none of this should come as a surprise. Rather, it means that critics can consider the finance novel in its various historical articulations as a privileged site to test realism’s limits and limitations.

    Finance, Credit, Mortgage

    If Karl Marx’s celebrated formula of capital—M-C-M’, with money leading to money that is worth more via the intermediary of the commodity—is quasi-identified with the realist novel, the formula’s shortened, financial variation—M-M’, money leading to money that is worth more without the intermediary of the commodity[x]—has come to mark its challenges. Perhaps in part reflecting this narrative (though this is not explicitly stated in the book), Dead Pledges’ study of the cultural representations of finance starts with a discussion of the realist novel but quickly moves away from it in order to look elsewhere in search of representations of finance.

    McClanahan’s case-studies concern the early twenty-first century, specifically the aftermath of the 2007-8 crisis. However, the historical-theoretical framework of Dead Pledges focuses on credit and debt. It extends some 40 years before that, to the early 1970s and the transformations of the economy that were set in motion then. Dead Pledges thus takes up the history of financialization, which is usually dated back to that time. Neoliberalism, which is sometimes confused with finance and shares some of its history, comes up a few times in the book’s pages but is not a key term in the analysis.

One could bring in various reasons for the periodization that McClanahan adopts, including—though with some important caveats—the Nixon administration’s unilateral decision in 1971 to abolish the gold standard, thus ultimately ending the Bretton Woods international exchange agreements that had been in place since World War Two and propelling the international markets into the so-called “Nixon shock.” However, in his key text “Culture and Finance Capital” Fredric Jameson already warned against the false suggestion of solidity and tangibility that such a reference to the gold standard (which was really “an artificial and contradictory system in its own right”, as Jameson points out[xi]) might bring. Certainly for McClanahan, who focuses on credit and debt and is not that interested in money, it would make sense to abandon so-called commodity theories of money and fiat theories of money—which have proposed that the origins of money lie in the exchange of goods or a sovereign fiat—for the credit or debt theory of money which, as per the revisionist analyses of for example David Graeber and Felix Martin,[xii] has exposed those other theories’ limitations. Indeed, McClanahan’s book explicitly mentions Graeber and other contemporary theorists of credit and debt (Richard Dienst, Maurizio Lazzarato, Angela Mitropoulos, Fred Moten and Stefano Harney, Miranda Joseph, Andrew Ross) as companion thinkers, even if none of those writers is engaged in any detail in the book.

Since the 1970s, consumer debt has exploded in the United States and Dead Pledges ultimately zooms in on a particular form of credit and debt, namely the home mortgage. McClanahan inherits this focus from the collapse of the home mortgage market, which triggered the 2007-8 crisis. McClanahan rehearses the history, and the complicated technical history, of this collapse at various moments throughout the book. Although this history is likely more or less familiar to readers, the repetition of its technical detail (from various angles, depending on the focus of each of McClanahan’s chapters) is welcome. As McClanahan points out, home mortgages used to be “deposit-financed” (6). While there was always a certain amount of what Marx in Capital: Vol. 3 called “fictitious capital”[xiii] (“fiktives Kapital”) in play—banks can write out more mortgages than they actually have money for based on their credit-worthy borrowers’ promises to repay (with interest)—the amount of fictitious capital has increased exponentially since the 1970s. More and more frequently mortgages are being funded not through deposits but “through the sale of speculative financial instruments” (6)—basically, through the sale of a borrower’s promise to repay. This development is enabled by the practice of securitization: many mortgages are bundled together, and the resulting pool is sliced into what are called tranches, which are then sold as financial instruments—mortgage-backed securities (MBSs) or collateralized debt obligations (CDOs). These kinds of instruments, so-called derivatives, are the hallmark of what in Giovanni Arrighi’s terms we can understand as the phase of capitalism’s financial expansion (see 14). This refers to an economic cycle during which value is produced not so much through the making of commodities but through value’s “extraction” (as Saskia Sassen puts it[xiv]) beyond what can be realized in the commodity—in this particular case, through the creation and especially the circulation of bundles of mortgages.

    As McClanahan explains, securitization is about “creating a secondary market” (6) for the sale of debt. The value of those kinds of debt-backed “commodities” (if we can still call them that) does not so much come from what they are worth as products—indeed, their value is dubious since for example the already mentioned tranches will include both triple A rated mortgages (mortgages with a low risk of default) and subprime mortgages (like the infamous NINJA mortgages that were granted to people with No Income, No Jobs, No Assets). Nevertheless, those MBSs or CDOs often still received a high rating, based on the flawed idea that the risk of value-loss was lessened by mixing the low risk mortgages with the high risk mortgages. What seemed to have mattered most was not so much the value of an MBS or CDO as product but their circulation, which is the mode of value-generation that Jasper Bernes among others has deemed to be central to the financial era. Ultimately, and while they brought the global financial system to the edge of collapse, they also generated extreme value for those who shorted those financial products. And shorted them big, as Adam McKay’s The Big Short would have it (Paramount, 2015; based on Michael Lewis’ 2010 book by the same title). By betting against them, the protagonists of Lewis’ and McKay’s story made an immense profit while everyone else suffered catastrophic losses.

    “Dematerialization” alone and cognate understandings of finance as “performative” and “linguistic”[xv]—in other words, this story as it could be told using the abolition of the gold standard as the central point of reference—cannot tell the whole truth here, especially not since credit and debt can actually be found at the origin of money. However, through those historico-economic developments of credit and debt there emerges a transformed role of credit and debt in our societies, from a “form of exchange that reinforces social cohesion” (185) to “a regime of securitization and exploitable risk, of expropriation and eviction” (182). Dematerialization—or perhaps better, various rematerializations: for example from gold or real estate to securitized mortgage debt—is important but without the material specifics of the history that McClanahan recounts, it does not tell us all that much.

    Echoing David Harvey’s description of the need for “new markets for [capital’s] goods and less expensive labor to produce them” as a “spatial fix” (Harvey qtd. 12), McClanahan reads the history summarized above as a “temporal fix” because “it allows capital to treat an anticipated realization of value as if it had already happened” (13). In 2007-8, of course, that fix turned out to be an epic fuck-up. McClanahan recalls Arrighi’s periodization (after Fernand Braudel) of capitalism as alternating “between epochs of material expansion (investment in production and manufacturing) and phases of financial expansion (investment in stock and capital markets)” (14) and notes that the 2007-8 crisis seems to have marked the end of the phase of financial expansion.

In Arrighi’s view, that would mean the time has come for the emergence of a new superpower, one that will step in for the U.S. as the global hegemon. A return of American (U.S.) greatness through a return to an era of material expansion (as the current U.S. President Donald J. Trump is proposing) appears unlikely within this framework: at best, it will have some short-lived, anachronistic success before the new hegemon arrives. However, will that new hegemon arrive? According to some, and McClanahan appears to align herself with them, the current crisis of the system “will not lead to the emergence of a new regime of capitalist accumulation under a different imperial superpower” (15). “Instead, it heralds something akin to a ‘terminal crisis’ in which no renewal of capital profitability is possible” (15). Does this then lead to an eternal winter, as Joshua Clover already asked?[xvi] Alternatively, are we finally done with those phases, and ready for something new?

    The Novel: Scale and Character

    If all of this has been theoretical so far, Dead Pledges’ four chapters stand out first as nuanced readings of works of contemporary culture. As McClanahan sees it, culture is the best site to understand debt as a “ubiquitous yet elusive social form” (2). By that, she does not mean we should forget about economic textbooks; but to understand debt as a “social form”, culture is the go-to place. McClanahan’s inquiry starts out traditionally, with a chapter about the contemporary realist novel. In it, she takes on behavioral economics, a subfield of microeconomics. Unlike macroeconomics, microeconomics focuses on individual human decisions. Whereas microeconomic models tend to assume rational agents, behavioralism does not: non-rational human decisions might cause or result from a market crisis.

    What caused the 2007-8 crisis? There are multiple answers, and McClanahan shows that they are in tension with one another. One answer—the macroeconomic one–is that the crisis was the result of an abstract and complex financial system that caved in on itself. Such an explanation tends to avoid individual responsibility. On the other hand, microeconomics, and behavioralism in particular, blames the crisis on the bad decisions of a few individuals, which exculpates institutions. This seemed to be the dominant mode of explanation. In this explanation too, however, the buck seemed to stop nowhere: how many bankers went to jail for the catastrophic market losses they caused? This leads to a larger question: how should one negotiate, in economics, between the macro and the micro, between the individual and the system—how should one assign blame, enforce accountability? How should one regulate? How should one even think, and represent, the connections between systems and individuals?

    One cultural form that has been particularly good at this negotiation is the novel, which tends to tell a macro-story through its representation of the micro, and so seeks “to capture the reality of a structural, even impersonal, economic and social whole” (24) while also considering “individual investors’ ‘personal impulses’” (31). This is what McClanahan finds in Martha McPhee’s Dear Money (2010), Adam Haslett’s Union Atlantic, and Jonathan Dee’s The Privileges. These novels marry the macro- and the micro-economic; they accomplish what McClanahan presents as a scalar negotiation. One should note, however, that in doing so they keep the behavioralist model intact, for they suggest that individual bad decisions lie at the origin of macroeconomic events. McClanahan shows that as novels Dear Money, Union Atlantic, and The Privileges take on that behavioralist remainder (in other words, the novel’s characteristic “focus on subjective experience and the meaningfulness of being a subject”; 33) through their awareness of their place in the genealogy of the novel. McClanahan’s readings ultimately reveal that the novels she looks at cannot save the individual from what she terms “a kind of ontological attenuation or even annulment” (33) that comes with their account of the 2007-8 crisis. Out go the full characters of the realist novel. The crisis demands it.

    What is left? The chapter culminates in a reading of Dee’s novel in which McClanahan cleverly suggests that the novel explores “the formal limits of sympathetic identification” and tells “money’s” story rather than the story of Adam and Cynthia “Morey” (51), who are the novel’s main characters. Thus, the novel is not so much about behavioralist psychology but about money itself. Capital is remade in the novel, McClanahan argues, “in the image of the human” (52), creating the uncanny effect of human beings who are merely stand-ins for money. Adam Morey/Money “has no agency, and he is all automaton, no autonomy. He has no interiority” (53). McClanahan does not note that this description places Adam in line with American Psycho’s “automated teller”[xvii] Patrick Bateman, who in a famous passage observes that while

    there is an idea of a Patrick Bateman, some kind of abstraction, … there is no real me, only an entity, something illusory, and though I can hide my cold gaze and you can shake my hand and feel flesh gripping yours and maybe you can even sense our lifestyles are probably comparable: I simply am not there.[xviii]

    Like Bateman’s narrative voice, which echoes the abstraction of finance, The Privileges’ voice is that of “investment itself” (52), which swallows human beings up whole.

    If the neoliberal novel, as per Walter Benn Michaels’ analysis (from which McClanahan quotes; 53), reduces society to individuals (and possibly their families, following Margaret Thatcher’s claim), The Privileges as a finance novel goes beyond that and “liquidat[es]” (53) individuals themselves. We are encountering here the terminal crisis of character that writes, in the guise of the realist novel, our financial present. Rich characterization is out. The poor character is the mark of financial fiction.

    Yet, such depersonalization does not capture the full dynamic of financialization either. In Chapter 2, McClanahan draws this out through a discussion of the credit score and its relation to contemporary literature. Although one’s credit score is supposed to be objective, the fact that one can receive different credit scores from different agencies demonstrates that an instability haunts it—and resubjectifies, if not repersonalizes, it. McClanahan starts out with a reading of an ad campaign for a website selling credit reports that quite literally personalizes the scores one can receive. It probably comes as no surprise that one’s ideal score is personalized as a white, tall, and fit young man; the bad score is represented by a short balding guy with a paunch. He also wears a threatening hockey mask.

    McClanahan suggests that what structures the difference here between the objective and the subjective, the impersonal and the personalized, is the difference between neutral credit and morally shameful debt. The former is objective and impersonal; the latter is subjective and personalized. The problem with this distinction, however, is not only that the supposedly objective credit easily lets the subjective slip back in (as is evident from the ad campaign McClanahan discusses); discussions of subjective debt also often lack quantitative and material evidence (when they ignore, for example, “the return in debt collection to material coercion rather than moral persuasion”; 57). Rather than showing how the personal can become “a corrective for credit’s impersonality” and how “objectivity [can become] a solution to the problem of debt’s personalization” (57)—debt always operates on the side of both—McClanahan considers how contemporary literature and conceptual art have turned those issues into “a compelling set of questions to be pursued” (57).

    If in the finance novel rich characterization is out, a question arises: what alternatives emerge for characterization at the crossroads of “credit, debt, and personhood” (57)? As McClanahan points out, there is a history to this question in the fact that “the practice of credit evaluation borrowed the realist novel’s ways of describing fictional persons as well as the formal habits of reading and interpretation the novel demanded” (59). The relation went both ways: “the realist novel drew on the credit economy’s models of typification … to produce socially legible characters” (59). Because “quantitative or systematized instruments for evaluating the fiscal soundness” of borrowers were absent, creditors used to rely “on subjective evaluations of personal character” to assess “a borrower’s economic riskiness” (59). Such evaluations took a narrative form; in other words, the credit report used to be a story. It provided the kind of characterizing detail that readers of literature would know how to interpret. The novel—the information it provided, the interpretation it required—was the model for the credit report.

    Enter the quantitative revolution: in the early 1970s the credit report becomes a credit score, the result of “an empirical technique that uses statistical methodology to predict the probability of repayment by credit applicants” (63). Narrative and character go out the window; the algorithmically generated score is all that counts. It is the end of the person in credit. As McClanahan is quick to point out, however, the credit score nevertheless cannot quite leave the person behind, as the “creditworthiness” that the credit score sums up ultimately “remains a quality of individuals rather than of data” (65). Therefore, the person inevitably slips back in, leading for example to the behavioralist models that McClanahan discusses in Chapter 1. Persons become numbers, but only to inevitably return as persons. McClanahan’s reading of the credit score negotiates this interchange.

    One can find some of this in Gary Shteyngart’s Super Sad True Love Story (2010). If critics have faulted the novel for its caricatures and stereotypes, which “[decline] the conventions of characterization associated with the realist novel” (68), McClanahan argues that Shteyngart’s characters are in fact “emblematic of the contemporary regime of credit scoring” (68). Shteyngart’s use of caricature “captures the creation of excessively particular data-persons”; his “use of stereotype registers the paradox by which a contemporary credit economy also reifies generalized social categories” (71). While the credit score supposedly does not “discriminate by race, gender, age, or class” (71), in fact it does. McClanahan relies in part on Frank Pasquale’s important work in The Black Box Society to note that credit scoring systematizes bias “in hidden ways” (Pasquale qtd. 72)—hidden because black boxed. This leads McClanahan back to the ad campaign with which she opened her chapter, now noting “its racialization” (72). The chapter closes with a discussion of how conceptual art and conceptual writing about credit and debt have negotiated the issue of personalization (and impersonalization). If “the personal” in Chapter 1 was associated first and foremost with microeconomics and behavioralism (which McClanahan criticizes), McClanahan shows that it can also do “radical work” (77) in response to credit’s impersonalization as “a simultaneously expanded and denaturalized category … representing social relations and collective subjects as if they were speaking persons and thus setting into motion a complex dialectic between the personal and the impersonal” (77). She does this through a discussion of the work of conceptual artist Cassie Thornton and the photographs of the “We are the 99%” Tumblr. Mathew Timmons’ work of conceptual writing CREDIT, on the other hand, plays with the impersonal to “provide an account of what constitutes the personal in the contemporary credit economy” (89).

    Although McClanahan does not state this explicitly, I read the arc of her Chapters 1 and 2 as a recuperation of the personal from its negative role in behavioralism (as well as its naturalized, racist role in the credit scoring discussed in Chapter 2), and more broadly from microeconomics. Following Thornton in particular (whose art also features on the cover of Dead Pledges), McClanahan opens up the personal onto the macro of the social and the collective. In Dead Pledges, the novel and especially the realist novel turn out to be productive sites to pursue such a project due to the scalar negotiation and rich characterization that are typical of the genre—and in the credit-crisis novel both of those are under pressure. If the novel gradually disappears from Dead Pledges to give way to photography and film in Chapters 3 and 4, the concern with realism remains. Indeed, McClanahan’s book ultimately seems to want to tease out a realism of the credit-crisis era, and it is that project to which I now turn.

    Foreclosure Photography and Horror Films

    In Chapters 3 and 4, once the novel is out of the way, McClanahan’s brilliance as a cultural studies scholar finally shines. Dead Pledges’ third chapter looks at post-crisis photography and “foreclosure photography” in particular. The term refers to photography of foreclosed homes but also evokes the practice of photography itself, which depends on a shutter mechanism that closes—or rather opens very quickly—in order to capture a reality. This signals a complicity between foreclosure and photography that McClanahan’s chapter explores, for example in a discussion of photographs of forced eviction by John Moore and Anthony Suau, which allow McClanahan to draw out the complicities between photography and the police—but not just the police. She notes, for example, that “[t]he photographer’s presence on the scene is underwritten by the capacity of both the state and the bank to violate individual privacy” (114). Dead Pledges ties that violation of individual privacy to a broader cultural development towards what McClanahan provocatively calls “unhousing” (115), evident for example in how various TV shows allow the camera to enter into the private sanctuary of the home to show how people live. Here, “the sanctity of domestic space [is defended] precisely by violating it” (115). In parallel, the “sacred” real estate of the home, the financial security of domestic property, has been transformed—violated—by the camera seeking to record foreclosure. The home now represents precarity. This development happened due to the creation of mortgage-backed securities, which turned real estate into liquidity and the home into an uncanny abode.

    The chapter begins with a comparative discussion of photographs in which the home is “rendered ‘feral’—overrun by nature” (103). McClanahan considers the narratives that such photography evokes: one is that of the disintegration of civilization into a lawless zone of barbarism—the story of the home gone wild. Looking at the mobilization of this narrative in representations of Detroit, she discusses its biopolitical, racial dimensions. Often the economic hardship that the photographs document is presented as something that happens to other people. But the being of debt today is such that it is everywhere—in other words the “othering” of the harm it produces (its location “elsewhere”) has become impossible. So even though the photographs McClanahan discusses “represent the feral houses of the crisis as the signs of racial or economic Otherness, these photographs ultimately reveal that indebtedness is a condition more widely shared than ever before, a condition that can no longer be banished to the margins of either national space or of collective consciousness” (113). It is us—all of us.

    The last two sections of the chapter deal with the uncanny aspects of foreclosure photography—with the foreclosed home as the haunted home and the uncanny architectural landscape as the flipside of the financial phase that was supposed to “surmount” (135) the crisis of industrial production but actually merely provided a temporal fix for it. Ghost cities in China—cities without subjects, cities whose assets have never been realized, marking the failed anticipation of credit itself—are the terminal crisis of capital. The uncanny, in fact, becomes a key theoretical focus of this chapter and sets up the discussion of horror films in the next: real estate (in other words, the familiar and secure), becomes the site where the foreign and unstable emerges, and as such the uncanny becomes a perfect concept for McClanahan to discuss the home mortgage crisis.

    Far from being secure “real” estate, the house, and in particular the mortgaged home, is haunted by debt; so-called “homeowners” are haunted by the fact that the bank actually “owns” their home. Property is thus rendered unstable and comes to be perceived as a real from which we have become alienated. In McClanahan’s vision, it even becomes a hostile entity (see 127). At stake here is ultimately not just the notion of property, but a criticism of property and “the inhospitable forms of domestic life produced by it” (105), an undermining of property—and with it a certain kind of “family”—as the cornerstone of liberalism. If McClanahan is critical of our era’s sanctification of the private through a culture of unhousing, her response is not to make the case for housing but rather to use unhousing to expose the fundamental uncanniness of property. With that comes the profanation (as opposed to the sanctification) of the private, as a criticism of inhospitable forms of domestic life. The domestic is not sacred. Property is not secure. Time to get out of the fortress of the house and the violence it produces. If the housing crisis has produced the precarization of the house, let us use it to reinvent domestic life.

    Given the horror film’s long-standing relationship with real estate—think of the haunted house–it was only a matter of time before the 2007-8 crisis appeared in contemporary horror films. And indeed, in the films that McClanahan looks at, it does appear—as “explicit content” (151). One has to appreciate here McClanahan’s “vulgar” approach: she is interested in the ways in which the horror films she studies “speak explicitly to the relationship between speculation, gentrification, and the ‘opportunities’ presented to investors by foreclosure” (151). Unlike for example American Psycho, which borrows a thing or two from the horror aesthetic, McClanahan’s horror flicks do not shy away from the nuts and bolts of finance; instead, they “almost [obsessively include] figures and terminology of the speculative economy in real estate” (151). This leads McClanahan to suggest that as horror films, they have “all the power of reportage”: they offer “a systematic account rendered with all the explicit mimetic detail one would expect of a realist novel” (151). At the same time, they do not do the kind of reporting Tom Wolfe was advocating back when: indeed, “they draw on the particular, uncanny capacity of the horror genre to defamiliarize, to turn ideological comfort into embodied fear” (151). McClanahan emphasizes, with a nod to Jameson (and his appropriation of Lévi-Strauss’ account of myth[xix]), that this is not just a performance of the “social contradictions” that always haunt narrative’s “imaginary solutions” (151). Instead, the films “oscillate between the imagined and the real or between ‘true stories’ and ‘crazy’ nightmares” (151). There are contradictions here both at the level of form and of content—both representational and material, McClanahan writes—and they remain without resolution. The credit-crisis era requires this sort of realism.

    Darren Lynn Bousman’s Mother’s Day (Anchor Bay, 2010), for example, a remake of Charles Kaufman’s 1980 film, oscillates between competing understandings of property: “as labor and sentimental attachment”; “as nontransferable value and the site of hospitality”; “as temporal and personal”; “as primarily a matter of contingent need” (157). If those all contradict each other, McClanahan points out that what they have in common is that “they are all incompatible with the contemporary treatment of the house as fungible property and liquid investment” (157). Upkeep, sentimental investment, and use all become meaningless when a hedge fund buys up large quantities of foreclosed homes to profit from renting them out. Such a development marks the end of “ownership society ideology in the wake of the crisis” (158). Like Crawlspace (Karz/Vuguru, 2013), another film McClanahan discusses, Mother’s Day reveals a strong interest in the home as a fixed asset, and in the changes that this asset has undergone due to securitization. Indeed, the two other films that McClanahan looks at, Drag Me to Hell (Universal, 2009) and Dream Home (Edko, 2010), are “more specifically interested in real estate as a speculative asset and in the transformation of uncertainty into risk” (161-2).

    By the time Dream Home ends, with an explicit reference—from its Hong Kong setting—to “America’s subprime mortgage crisis” (170), it is hard not to be entirely convinced that with the horror film, McClanahan has uncovered the perfect genre and medium for the study of the representation of the home mortgage crisis. It is here that realism undergoes its most effective transformation into a kind of horrific realism or what I propose to call realist horror, an aesthetic that, like so much else when it comes to finance, cannot be easily located but instead oscillates between different realms. Indeed, if Dream Home provides key insights into the home mortgage crisis in the U.S., it is worth noting that it does so from its Chinese setting, which McClanahan takes to indicate that many of the changes that happened as part of financialization from the 1970s to the present in the U.S. in fact “occurred first in Asia” (174). This opens up the American (U.S.) focus of McClanahan’s book onto the rest of the world, raising some questions about the scope of the situation that Dead Pledges analyzes: how global is the gloomy, even horrific picture that McClanahan’s book paints? This seems particularly important when it comes to imagining, as McClanahan does in the final part of her book, political responses to debt.

    Debt and the Revolution

    While the home mortgage is McClanahan’s central concern, Dead Pledges closes with a political Coda about student debt. If McClanahan returns here to student loans (a topic that she had already addressed in Chapter 2), it is because they are perhaps the representative example of the securitized debt markets that she has discussed. Given the staggering amount of student debt, the low-regulation environment of student loans, and the default rate on student loans, it is likely that the next major market crash will result from the collapse of the securitized student debt market. It is worth noting, indeed, that some are already shorting this market in the hopes of making a profit from its collapse a few years down the line (The Bigger Short, anyone?). In this situation, McClanahan proposes “sabotage”: like several others, most prominently the Strike Debt movement, she is calling on students to stop paying their debts. As the Strike Debt movement puts it: “If you owe the bank $10,000, you’re at the mercy of the bank. If you owe the bank $10 million, the bank is at your mercy”.[xx] Today, banks are at the mercy of students through the massive amounts of student credit that have been extended.

    McClanahan arrives at this politics of sabotage through her discussion of the collapse of the home mortgage market, and specifically of foreclosure. In the first part of her Coda, she discusses how people have responded to their homes being foreclosed by “acts of vandalism”, like “punch[ing] holes in the walls”, leaving “dead fish to rot in basements”, or breaking “pipes to flood their houses with water or sewage”, which she singles out as a “clever” way of “turning the problem of their home’s asset illiquidity on its head” (186). If these are acts of sabotage, it is because they “[remove] commodities from circulation or [block] the paths by which they (and money) circulate” (186). McClanahan embraces this tactic. From this vantage point, one can understand why, as someone recently reported to me after a visit to Greece, the banks there are holding off on foreclosing on those who have defaulted on their mortgages: by keeping the owners in their homes, the banks are trying to guarantee the protection of their assets—clearly the better option, especially in view of the absence of renter or buyer demand for the apartments or homes whose owners have defaulted. For the moment, the banks in Greece are paying their borrowers for the maintenance of the bank’s assets.

    A couple of things are worth noting: first, “vandalism,” or the destruction of an object, does not necessarily coincide with the destruction of that object as a commodity. Indeed, if to destroy the object as commodity is to take it out of circulation—as McClanahan, following Bernes (following Marx), argues (186)—then the question is first and foremost how to block that circulation, which may or may not involve acts of vandalism. In fact, one might imagine the situation in Greece, which involves labor being invested in the upkeep of a home, ultimately leading to a property claim—to taking the home out of the circulation that makes the bank its money. McClanahan considers such an understanding of property in her reading of Mother’s Day in Chapter 4. However, McClanahan is taking aim at the root of property (as becomes clear in both Chapters 3 and 4), and so the latter might not be a satisfactory solution, since it keeps the notion of property intact. In addition, one might want to ask whether the home is the appropriate target for vandalism. Why not sabotage the bank’s plumbing instead? Leave some fish to rot in the bank’s basement?

    Secondly, in the case of student loans, what is the asset to vandalize? The asset that students acquire through loans is “education.” It is an asset that the bank cannot reclaim, although of course the diploma that formalizes it can be and has been taken away. But it is not inconceivable that, if the home mortgage crisis is the model here, the institutions and people providing an education will be vandalized: universities, professors, administrators—rather than the banks. And some (Trump University comes to mind) would certainly deserve it. At my own (private arts) institution, where tuition is currently set at a whopping $46,830, I have seen posters in which students bitterly thank the university president for their student debt or claim that the only thing that unites them as students is their debt. If the students look at the institution’s budget more closely, they will see that it is tuition-driven: specifically, the pie chart clearly shows that (debt-based) tuition pays the faculty’s salaries. This pits the students not only against the university president or other administrators (whose salaries, needless to say, far exceed those of the faculty) but ultimately against the faculty. McClanahan notes that faculty retirement may also be involved in this: Student Loan Asset Backed Securities (or, in finance’s inexhaustible generation of acronyms, SLABS) are “tranched and sold to institutional investors, including many pension funds” and so “it’s possible for a professor at a university to be indirectly invested in the debt of her own students” (189). Not just in the present, through their salary, but also in the future, for their retirement.

    It is important to argue about student debt, and some faculty—like McClanahan—are bringing that argument into their classrooms. But it will be interesting to see how that develops once the student debt market collapses and faculty salaries and retirement funds implode. Kant’s answer to the question “What is Enlightenment?” was famously split between “Argue all you please” and “Obey.” What happens if, in this particular case, the students stop obeying? Unless they correctly identify the agent of their subjection—faculty? administrator? university president? university? bank? government? President?—it might ring the death knell of the U.S. university system. Of course, that may have been the project all along—now with the students as its driving force.

    The political dimension of McClanahan’s book, somewhat disavowed in the introduction (she notes early on that “Dead Pledges is not a work of political economy”; 15) but prominent in the Coda, may leave some readers frustrated. This is, on the one hand, because the Coda makes a comparative move from home mortgages to student loans that does not come with the nuanced discussion of economics that McClanahan develops elsewhere in the book (there is no consideration, for example, of how CDOs and SLABS differ: does it make sense to short SLABS? Why? Why not?). Yet the economic specifics may be important when trying to decide on the most effective political response. The specific response that Dead Pledges offers—sabotage—may also leave some readers frustrated. While sabotage can be effective as a politics that would break financialization’s extraction of value through circulation, it remains, ultimately, a negative intervention that temporarily (perhaps in some cases definitively) interrupts or destroys its targets. As far as politics goes, that response can hardly be sufficient; some kind of positive engagement would be required to imagine the world that will come after it. One would need to ask, it seems, about the “affirmative”[xxi] dimension of the sabotage that is proposed here.

    In a review[xxii] of Wendy Brown’s book Undoing the Demos: Neoliberalism’s Stealth Revolution,[xxiii] McClanahan has criticized Brown on a number of counts: first, because of her largely negative description of the collective as something that neoliberalism destroys; and second, because through that description Brown uncritically projects a pre-neoliberal collective that was somehow unaffected by economic pressures. Sarah Brouillette, with whom McClanahan recently teamed up in a response to the Yale hunger strike,[xxiv] has made a similar point.[xxv] As far as positive descriptions of collectivity go, however, McClanahan’s sabotage may also leave one dissatisfied. Furthermore, one may wonder whether the turn to sabotage as a politics is not partly a consequence of Dead Pledges’ focus on the United States. When considering political responses to the debt crisis, it might be the limits and limitations of that focus—a version of the “there is no alternative” that is often associated with neoliberalism—that prevent any consideration of, say, the state’s potentially positive roles in processes of financial regulation or even wealth redistribution. Is sabotage the only politics that the left has left in the U.S.? Might not other parts of the world—certain countries in Europe, certain countries in Latin America—offer alternatives from which the left in the U.S. could learn? I am not being naïve here about what I am proposing: it would require fundamental political changes in the U.S. for this to come about. But again, are those changes entirely beyond the American (U.S.) left—so much so that the political imaginary stops at sabotage? Who was it again that rhymed “sabotage” with “mirage”? Sabotage should target the mirage, to be sure; but does their rhyme also evoke sabotage’s complicity with the mirage? Has leftist politics really come down to leaving dead fish to rot in the basements of what used to be our homes? Of course, it may be unfair to expect that those who are defaulting on their mortgages become the agents of the leftist revolution. But what about the students who emerge as the political subjects of our time at the end of McClanahan’s book? Let us focus, post-sabotage, on what other universities they might imagine—what other states.

    I am thinking of what another revolutionary says during that famous rooftop conversation in Gillo Pontecorvo’s The Battle of Algiers (Rizzoli, Rialto Pictures, 1966):

    It’s hard to start a revolution. Even harder to continue it. And hardest of all to win it. But, it’s only afterwards, when we have won, that the true difficulties begin.

    Work in critical finance studies often recalls how it has become easier for us to imagine the end of the world than the end of capitalism.[xxvi] Point taken. But Pontecorvo’s film can help one adjust this position: yes, it is hard to imagine the end of capitalism; but it is even harder to imagine the world that will come after it.

    There is probably no point in worrying, as I will admit I do, about that world and the “true difficulties” that it will bring. Such worrying may prevent one from starting a revolution in the first place. Best to focus on the battle at hand. Certainly, McClanahan’s Dead Pledges provides the perfect impetus.

    I would like to thank Paul Bové and Sarah Brouillette for their generous editing of this review. 

    Notes

    [i] In an article titled “The Plutocratic Imagination,” Jeffrey J. Williams notes for example that “[s]ince the mid-2000s there has also been a spate of novels foregrounding finance” (Williams, “The Plutocratic Imagination.” Dissent 60:1 (2013): 96).

    [ii] David Foster Wallace may appear to be the odd one out in this list but Jeffrey Severs’ recent David Foster Wallace’s Balancing Books: Fictions of Value (New York: Columbia UP, 2017) justifies his inclusion.

    [iii] La Berge, Leigh Claire, and Alison Shonkwiler, eds. Reading Capitalist Realism. Iowa City: U of Iowa P, 2014. 1.

    [iv] One can think here of Alberto Toscano and Jeff Kinkle’s book Cartographies of the Absolute (Winchester: Zero Books, 2015), which took its inspiration from Fredric Jameson’s work on these issues.

    [v] Breitbach, Julia. Analog Fictions for the Digital Age: Literary Realism and Photographic Discourses in Novels after 2000. Rochester: Camden House, 2012. 8.

    [vi] Wolfe, Tom. “Stalking the Billion-Footed Beast: A Literary Manifesto for the New Social Novel.” Harper’s Magazine Nov. 1989, 45-56. Here 52. Using a nickname that was used on the Salomon Brothers trading floor to refer to those who had made a monster bonds trade, Michelle Chihara aptly termed this kind of realism “big swinging dick realism” in a review of La Berge and Kornbluh’s books about financial fiction. See: Chihara, Michelle. “What We Talk About When We Talk About Finance.” Los Angeles Review of Books, 09/18/2015, accessible: https://lareviewofbooks.org/article/what-we-talk-about-when-we-talk-about-finance/.

    [vii] See, for example: La Berge, Leigh Claire. “The Men Who Make the Killings: American Psycho and the Genre of the Financial Autobiography.” In: La Berge, Scandals and Abstraction: Financial Fiction of the Long 1980s. Oxford: Oxford UP, 2015. 113-147. Here in particular 139.

    [viii] Nealon, Jeffrey T. Post-Postmodernism: Or, The Cultural Logic of Just-In-Time Capitalism. Stanford: Stanford UP, 2012. The famous analysis evoked in the previous part of the sentence is of course Fredric Jameson’s.

    [ix] Kornbluh’s book, among others, testifies to this: Kornbluh, Anna. Realizing Capital: Financial and Psychic Economies in Victorian Form. New York: Fordham UP, 2014.

    [x] Note that Marx already singled out this shorter version as the formula for “interest-bearing capital”, a situation in which money begets more money without the intermediary of the commodity: Marx, Karl. Capital: Vol. 1. Trans. Ben Fowkes. New York: Penguin, 1990. 257. A discussion of M-M’ as the financial variation of the general formula of capital can be found for example in: Marazzi, Christian. Capital and Language: From the New Economy to the War Economy. Trans. Gregory Conti. Los Angeles: Semiotext(e), 2008.

    [xi] Jameson, Fredric. “Culture and Finance Capital.” In: The Cultural Turn: Selected Writings on the Postmodern 1983-1998. New York: Verso, 2009. 154.

    [xii] Graeber, David. Debt: The First 5,000 Years. New York: Melville House, 2011; Martin, Felix. Money: The Unauthorized Biography—From Coinage to Cryptocurrencies. New York: Vintage, 2015.

    [xiii] Marx, Karl. Capital: Vol. 3. Trans. David Fernbach. New York: Penguin, 1991. 596.

    [xiv] Sassen, Saskia. Expulsions: Brutality and Complexity in the Global Economy. Cambridge (MA): Harvard UP, 2014.

    [xv] See, for example, the already mentioned book by Marazzi or also: Berardi, Franco “Bifo.” Precarious Rhapsody: Semiocapitalism and the Pathologies of the Post-Alpha Generation. London: Minor Compositions, 2009; Berardi, The Uprising: Poetry and Finance. Los Angeles: Semiotext(e), 2012.

    [xvi] Clover, Joshua. “Autumn of the System: Poetry and Financial Capital.” JNT: Journal of Narrative Theory 41:1 (2011): 34-52.

    [xvii] This is how La Berge has perceptively analyzed American Psycho’s mode of narration: La Berge, Scandals, 136.

    [xviii] Ellis, Bret Easton. American Psycho. New York: Vintage, 1991. 376-377.

    [xix] See: Jameson, Fredric. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca: Cornell UP, 1981.

    [xx] McKee, Yates. “DEBT: Occupy, Postcontemporary Art, and the Aesthetics of Debt Resistance.” South Atlantic Quarterly 112:4 (2013): 784-803. Here 788.

    [xxi] I borrow the notion of “affirmative sabotage” from Gayatri Chakravorty Spivak. See, for example: Evans, Brad (interview with Gayatri Spivak), “When Law is Not Justice”, 07/13/2016, accessible: https://www.nytimes.com/2016/07/13/opinion/when-law-is-not-justice.html?_r.

    [xxii] McClanahan, Annie. “On Becoming Non-Economic: Human Capital Theory and Wendy Brown’s Undoing the Demos.” Theory & Event, forthcoming.

    [xxiii] Brown, Wendy. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books, 2016.

    [xxiv] Brouillette, Sarah, Annie McClanahan, and Snehal Shingavi. “Risk and Reason/The Wrong Side of History: On the Yale University Unionization Efforts”, 05/16/2017, accessible: http://blog.lareviewofbooks.org/essays/risk-reasonthe-wrong-side-history-yale-university-unionization-efforts/.

    [xxv] Brouillette, Sarah. “Neoliberalism and the Demise of the Literary.” In: Huehls, Mitchum and Rachel Greenwald-Smith, eds. Neoliberalism and Contemporary Literature. Baltimore: Johns Hopkins UP, forthcoming. The uncorrected page proofs with which I am working are numbered 277-290.

    [xxvi] The statement is usually attributed to Fredric Jameson.

    Arne De Boever teaches American Studies in the School of Critical Studies at the California Institute of the Arts, where he also directs the MA Aesthetics and Politics program. He is the author of States of Exception in the Contemporary Novel (2012), Narrative Care (2013), and Plastic Sovereignties (2016), and a co-editor of Gilbert Simondon (2012) and The Psychopathologies of Cognitive Capitalism (2013). He edits Parrhesia and the Critical Theory/Philosophy section of the Los Angeles Review of Books and is a member of the boundary 2 collective. His new book, Finance Fictions, is forthcoming with Fordham University Press.

  • Eugene Thacker – Weird, Eerie, and Monstrous: A Review of “The Weird and the Eerie” by Mark Fisher

    Eugene Thacker – Weird, Eerie, and Monstrous: A Review of “The Weird and the Eerie” by Mark Fisher

    by Eugene Thacker

    Review of Mark Fisher, The Weird and the Eerie (Repeater, 2017)

    For a long time, the horror genre was not generally considered worthy of critical, let alone philosophical, reflection; it was the stuff of cheap thrills, pulp magazines, B-movies. Much of this has changed in recent years, as a robust and diverse critical literature has emerged around the horror genre, much of which considers horror not only as a reflection of society but also as an autonomous platform for posing far-reaching questions concerning the fate of the human species, the species that has named itself. These are sentiments that have preoccupied recent writing on the horror genre, much of which borrows from developments in contemporary philosophy and attempts to expand the confines of horror beyond the usual fixation on gore, violence, and shock tactics. This hasn’t always been the case. Even today, writing on genre horror often tends towards “list” books (of the type The Top 100 Italian Horror Films From 1977, Volume IV), or books that are basically print-on-demand databases (The Encyclopedia of Asian Ghost Stories from the Beginning of Time, and Before That). These are rounded out by a plethora of introductory textbooks and surveys, usually aimed at film studies undergraduates (e.g. Key Terms in Cultural Studies: Splatterpunk), and opaque academic monographs of Lacanian psychoanalytic semiotic readings of horror film that themselves seem to be part of some kind of academic cult.

    While such books can be informative and helpful, reading them can be akin to the slightly woozy feeling one has after having gone down a combined Google/Wikipedia/YouTube rabbit-hole, emerging with bewildered eyes and terabytes of regurgitated data. However, recent writing on the horror genre takes a different approach, eschewing the poles of either the popular or the academic for a perhaps yet-to-be-named third space. One book that takes up this challenge is Mark Fisher’s The Weird and the Eerie, published this year. (Fisher is likely known to readers through his blog K-punk, which had been running for almost two decades before his untimely death.) What Fisher’s study shares with other like-minded books is an interest in expanding our understanding of the horror genre beyond the genre itself, and he does this by focusing on one of the deepest threads in the horror genre: the limits of human beings living in a human-centric world.

    As a case study, consider the opening passage from H.P. Lovecraft’s well-known short story “The Call of Cthulhu”:

    The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

    With this – arguably the most foreboding opener ever written for a story – Lovecraft sets the stage for what is really an extended meditation on human finitude. Originally published in the February 1928 issue of the pulp magazine Weird Tales, “Cthulhu” ostensibly brings together the perspectives of deep time and deep space to reflect on the comparatively myopic and humble non-event that is human civilization – at least that’s how Lovecraft himself puts it. It is well known that Lovecraft took cues from the likes of Edgar Allan Poe, Algernon Blackwood, and Arthur Machen – influences that he himself acknowledged. Equally well known is Lovecraft’s notorious xenophobia (often expressed in his correspondence as outright racism). Yet in spite of – or because of – this, Lovecraft remained unambiguous in his own approach to the horror genre. In his numerous essays, notes, and letters, he explains, with an unflinching misanthropy, how a horror story should evoke “an atmosphere of breathless and unexplainable dread of outer, unknown forces,” forces that point towards a “malign and particular suspension or defeat of those fixed laws of Nature which are our only safeguard against the assaults of chaos and the daemons of unplumbed space.” The “monsters” in such tales were far from the usual line-up of vampires, werewolves, zombies, and demons – all of which, for Lovecraft and his colleagues, end up serving as mere solipsistic reflections of human-centric hopes and fears. They are often described in abstract, elemental, almost primordial ways: “the colour out of space,” “the shadow out of time,” or simply “the lurking fear.”

    The story of “Cthulhu” itself – which details the discovery of a cult devoted to an ancient, malefic Elder Deity vaguely resembling an oozing winged cephalopod emerging from a hidden tomb of impossibly-shaped Cyclopean black geometry, foretelling not only the end of the world but the deeper futility of the entirety of human civilization – has since obtained a cult status among horror authors, critics, and fans alike. In the early 20th century, like-minded tales of cosmic misanthropy were written by Lovecraft contemporaries Clark Ashton Smith, Robert E. Howard, and Robert Bloch, as well as by later authors of the weird tale such as Ramsey Campbell, Caitlín Kiernan, China Miéville, and Junji Ito. Like a slow-moving, tentacular meme, the Cthulhu “mythos” has reached far beyond the confines of literary horror. Film adaptations abound (the term “straight-to-video” no longer applies, but is still apt here). Video games, which nearly always end in despair and/or death. Role-playing games, complete with impossibly-shaped 10-sided black dice. A visit to any Comic Con will yield a dizzying array of comics, ‘zines, artwork, posters, bumper stickers, hoodies, Miskatonic University course catalogs, editions of the dreaded Necronomicon, and even Cthulhu plushies for the Lovecraft toddler. An industry is born. Today, distant cousins of Cthulhu can be seen in the Academy Award-nominated Arrival (2016), and the distinctly un-nominated burlesque that is Independence Day: Resurgence (2016). Cthulhu, it seems, has gone mainstream.

    Amid all the fondness for such abysmal and tentacular monstrosities, it is easy to overlook the themes that run through Lovecraft’s short tale, themes at once disturbing and compelling, and which mark the tradition often referred to as “supernatural horror” or “cosmic horror.” When Lovecraft characters happen upon strange creatures like Cthulhu (or worse, the Shoggoths), they don’t have the typical reactions. “Fear” is too simple a term to describe it; it encompasses everything without saying anything. But neither are they overcome by the more literary affects of “terror” or “horror,” like the characters of an old gothic novel. They have neither the time nor the patience for the critical distance afforded by a psychoanalytic “uncanny,” or the literary structures of the “fantastic.” Confronted with Cthulhu, Lovecraft’s characters simply freeze. They become numb. They go dark. Frozen thought. They can’t wrap their heads around what is right before them. What they “feel” is exactly this “inability of the human mind to correlate all its contents.” Forget the fear of death, I’ve just discovered a primordial, other-dimensional, slime-ridden necropolis of obsidian blasphemy that throws into question all human knowledge on this now-forsaken speck of cosmic dust we laughably call “our” planet.

    Yet, in all their pulpy, melodramatic, low-brow seriousness, the questions raised by Lovecraft and other writers in Weird Tales are also philosophical questions. They are questions that address the limits of human knowledge in a rapidly-changing world, a world that seems indifferent to the machinations of science or doctrinal exuberance of religion, impassive before the hubris of technological advance or the lures of political ideology – a cold “crawling chaos” lurking just beneath the fragile fabric of humanity. What the characters of such stories discover (aside from the usual train of madness, dread, and, well, death) is a kind of stumbling humbleness, the human brain discovering its own limit, enlightened only of its own hubris – the humility of thought.

    *

    This theme  – the limits of what can be known, the limits of what can be felt, the limits of what can be done – is central to Fisher’s The Weird and the Eerie. This is markedly different from other approaches to horror, which, however critical they may seem, often regard the horror genre as having an essentially therapeutic function, enabling us to purge, cope with, or work through our collective fears and anxieties. This therapeutic view of horror often becomes polarized between reactionary readings (a horror story that promotes the establishing or re-establishing of norms) or progressive readings (a horror story that promotes otherness, difference, and transgression of norms). And yet, in the final analysis, it is also hard to escape the sense that there is a certain kind of solipsism to the horror genre, that it is we human beings that remain at the center of it all, who have either constructed boundaries and bunkers and have once again staved off another threat to our collective identity, or who have devised clever ways of creating hybrids, fusions, and monstrous couplings with the other, thereby extending humanity’s long dreamed-of share of immortality.

    Whether reactionary or progressive, both responses to the horror genre involve a strategy in which the world in all its strangeness is transformed into a world made in our own image (anthropomorphism), or a world rendered useful for us as human beings (anthropocentrism). In spite of all the horrifying things that happen to the characters in horror stories, there is a sense in which the horror genre is ultimately a kind of humanism, a panegyric to the limitless potential of human knowledge, the immeasurable capacity for human feeling, the infinite promise of human sovereignty. This is, of course, not surprising, given the somber didactics of even the most extreme zombie apocalypses, vampiric mutations, or demonic plagues. Species self-interest is at stake. Humanity may be brought to the brink of extinction, only so that that same humanity may extend its mastery (self-mastery and mastery over its environment), and even obtain some form of ascendency over its own tenuous, existential status. Subtending the survivalist imperative of the horror genre and its pragmatic arsenal of mastering monsters of all kinds is another kind of mastery – a metaphysical mastery.

    But this is only one way of understanding the horror genre. The insight of books like Fisher’s is that the horror genre is also capable of chipping away at this species-specific sovereignty, taking aim at the twin pillars of anthropomorphism and anthropocentrism. Instead of being concerned with species self-interest and mastery, such horror stories tend more towards humility, hubris, and even, in their darkest moments, futility. It is a project that is doomed to failure, of course, and perhaps this is why so many of the characters in the tales of Lovecraft, Algernon Blackwood, or Izumi Kyoka find themselves in worlds that are both untenable and unlivable. They end up with nothing but a bit of useless quasi-wisdom, scribbling away madly in a darkened forest room, trying to make sense of it all not making any sense. Or they detach themselves from the humdrum human world of plans and projects, finding themselves inexorably pulled headlong into the ambivalent abyss of self-abnegation. Or worse – they simply continue to exist. What results is what we might call a “bleak humanism” – a horror story interested in humanity only to the extent that humanity is defined by its uncertainties, its finitude, its doubts – the humility of being human.

    Fisher’s terms are relatively clear. “What the weird and the eerie have in common is a preoccupation with the strange.” For Fisher, the strange is, quite simply, “a fascination for the outside […] that which lies beyond standard perception, cognition and experience.” But the weird and the eerie are quite different in how they apprehend the strange. As Fisher writes, “the weird is constituted by a presence – the presence of that which does not belong.” There is something exorbitant, out-of-place, and incongruous about the weird. It is the part that does not fit into the whole, or the part that disturbs the whole – threshold worlds populated by portals, gateways, time loops, and simulacra. Fundamental presumptions about self, other, knowledge, and reality will have to be rethought. “The eerie, by contrast, is constituted by a failure of absence or by a failure of presence. There is something where there should be nothing, or there is nothing where there should be something.” Here we encounter disembodied voices, lapses in memory, selves that are others, revelations of the alien within, and nefarious motives buried in the unconscious, inorganic world in which we are embedded.

    The weird and the eerie are not exclusive to the more esoteric regions of cosmic horror; they are also embedded in and bound up with quotidian notions of selfhood and the everyday relationship between self and world. The weird and eerie crop up in those furtive moments when we suspect we are not who we think we are, when we wonder if we do not act so much as we are acted upon. When everything we assumed to be a cause is really an effect. The weird and eerie are, ultimately, inseparable from the fabric of the social, cultural, and political landscape in which we are embedded. Fisher: “Capital is at every level an eerie entity: conjured out of nothing, capital nevertheless exerts more influence than any allegedly substantial entity.” There is a sense in which, for Fisher, the weird and the eerie constitute the poles of our ubiquitous “capitalist realism,” prompting us to re-examine not only presumptions concerning human agency, intentionality, and control, but also inviting a darker, more disturbing reflection on the strange agency of the inanimate and impersonal materiality of the world around us and within us.

    Fisher’s interest in Lovecraft stems from this shift in perspective from the human-centric to the nonhuman-oriented – not simply a psychology of “fear,” but the unnerving, impersonal calm of the weird and eerie. As scholars of the horror genre frequently note, Lovecraft’s tales are distinct from genre fantasy, in that they rarely posit an other world beyond, beneath, or parallel to this one. And yet, anomalous and strange events do take place within this world. Furthermore, they seem to take place according to some logic that remains utterly alien to the human world of moral codes, natural law, and cosmic order. If such anomalies could simply be dismissed as anomalies, as errors or aberrations in nature, then the natural order of the world would remain intact. But they cannot be so easily dismissed, and neither can they simply be incorporated into the existing order without undermining it entirely. Fisher nicely summarizes the dilemma: “a weird entity or object is so strange that it makes us feel that it should not exist, or at least that it should not exist here. Yet if the entity or object is here, then the categories which we have up until now used to make sense of the world cannot be valid. The weird thing is not wrong, after all: it is our conceptions that must be inadequate.”

    *

    This dilemma (which literary critic Tzvetan Todorov called “the fantastic”) is presented in unique ways by authors of the weird tale and cosmic horror. Such authors refuse to identify the weird with the supernatural, and often refuse the distinction between the natural and supernatural entirely. They do so not via mythology or religion, but via science – or at least a peculiar take on science. In cosmic horror, the strange reality described by science is often far more unreal than any vampire, werewolf, or zombie. Fisher highlights this: “In many ways, a natural phenomenon such as a black hole is more weird than a vampire.” Why? Because the existence of the vampire, anomalous and transgressive as it may seem, actually reinforces the boundary between the natural order “in here” and a transcendent, supernatural order “out there.” “Compare this to a black hole,” Fisher continues, “the bizarre ways in which it bends space and time are completely outside our common experience, and yet a black hole belongs to the natural-material cosmos – a cosmos which must therefore be much stranger than our ordinary experience can comprehend.” Science, for all its explanatory power, inadvertently reveals the hubris of the explanatory impulse of all human knowledge, not just science.

    Authors such as Lovecraft were well aware of this shift in their approach to the horror genre. An oft-cited passage from one of Lovecraft’s letters reads: “…all my tales are based on the fundamental premise that common human laws and interests and emotions have no validity or significance in the vast cosmos-at-large.” To write the truly weird tale, Lovecraft notes, “one must forget that such things as organic life, good and evil, love and hate, and all such local attributes of a negligible and temporary race called mankind, have any existence at all.” So much for humanism, then. But Fisher is also right to note that Lovecraft’s tales are not simply horror tales. As Lovecraft himself repeatedly noted, the affects of fear, terror, and horror are merely consequences of human beings confronting an impersonal and indifferent non-human world – what Lovecraft once called “indifferentism” (which, as he jibes, wonders “whether the cosmos gives a damn one way or the other”). There is an allure to the unhuman that is, at the same time, opaque and obscure. As Fisher writes, “it is not horror but fascination – albeit a fascination usually mixed with a certain trepidation – that is integral to Lovecraft’s rendition of the weird…the weird cannot only repel, it must also compel our attention.”

    This reaches a pitch in Fisher’s writing on author Nigel Kneale and his series of Quatermass films and TV shows. The Quatermass and the Pit series, for instance, opens with the shocking discovery of an alien spaceship buried within the bowels of a London tube station (which station I will not say). The strange, quasi-insect remains inside the ship point to a form of life very different from terrestrial life. But the science reveals that the alien spaceship is actually a relic from the distant past. It seems that not only geology and cosmology, but human history will have to be rethought. Gradually, the scientists learn that the alien relics are millions of years old, and in fact a distant, early progenitor of human beings. We, it turns out, are they – or vice-versa. The Quatermass series not only demonstrates the efficacy of scientific inquiry; it puts forth a further proposition: that science works too well. “Kneale shows that an enquiry into the nature of what the world is like is also inevitably an unraveling of what human beings had taken themselves to be…if human beings fully belong to the so-called natural world, then on what grounds can a special case be made for them?” Reality turns out to be weirder and more eerie than any fantastical world or alien civilization. This is what Fisher calls “Radical Enlightenment,” a kind of physics that goes all the way, a materialism to the nth degree, even at the cost of disassembling the self-aware and self-privileging human brain that conceives of it. Reversals and inversions abound. What if humanity itself is not the cause of world history but the effect of material and physical laws that we can only dimly intuit?

    This theme of Radical Enlightenment runs through Fisher’s book. While he does discuss works of fiction or film one would expect in relation to the horror genre (Lovecraft, Kubrick’s The Shining, David Lynch’s recent films), Fisher also offers ruminations on contemporary works (such as Jonathan Glazer’s 2013 film Under the Skin), as well as a number of evocative comparisons, such as a chapter on the weird effects of time loops in Rainer Werner Fassbinder’s film World on a Wire and Philip K. Dick’s novel Time Out Of Joint. There are also several surprises, including a meditation on the strange “vanishing landscapes” in M.R. James’s ghost stories and Brian Eno’s 1982 ambient album On Land. Also welcome is Fisher’s attentiveness to under-appreciated works in the horror genre, including the disquieting short fiction of Daphne du Maurier. In the span of a few carefully-written pages, Fisher follows the twists and turns of his twin concepts one chapter at a time, one example at a time, until it is revealed exactly how enmeshed the weird and the eerie are in culture generally.

    *

    The Weird and the Eerie is an evocative and carefully-written short study in cultural aesthetics. Far from the familiar line-up of vampires, zombies, and demons, Fisher’s eclectic examples speak directly to one of the central themes of the horror genre: the limits of human knowledge, the metamorphic shapes of fear, and the blurriness of boundaries of all types. His simple conceptual distinction quickly gives way to reversals, permutations, and complications, ultimately refusing any notion of a monstrous or alien unhumanness  “out there”; with Fisher, the unhuman is more likely to reside within the human itself (or as Lovecraft might write it, “the unhuman is discovered to reside within the human itself”).

    Many books on the horror genre are concerned with providing answers, using varieties of taxonomy and psychology to provide a therapeutic application to “our” lives, helping us to cathartically purge collective anxieties and fears. For Fisher, the emphasis is more on questions, questions that target the vanity and presumptuousness of human culture, questions regarding human consciousness elevating itself above all else, questions concerning the presumed sovereignty of the species at whatever cost – perhaps questions it’s better not to pose, at the risk of undermining the entire endeavor to begin with.

    I should let the reader decide which approach makes more sense, given the weird and/or eerie “Waldo-moment” in which we currently find ourselves. But the weird and the eerie are scalable, pervading broad cultural structures as well as the minutiae of personal ruminations. I’ve known Fisher as a colleague for some time. About a week after I had agreed to do this review, I heard via email of Fisher’s suicide. Someone I knew was previously there, over there, doing what they do, the way we so often presume a person’s presence in between moments of punctuated interaction. And then, suddenly, they’re not there. About a week after this, The Weird and the Eerie arrived in the mail. It was hard not to pick up the book and feel it had a kind of aura around it, as if it were some kind of final statement, a last communiqué. I had it on the table in a short stack with other books, and I kept half-expecting it to also vanish, as if its very presence there were incongruous. I would occasionally pick up the book and flip through it, as if secretly hoping to discover pages that weren’t there before. But my copy was the same as all the others. Besides, isn’t that essentially what a book is, a last word written by someone either long dead or who will die in the future? Maybe all books are eerie in this way.

    Eugene Thacker is the author of several books, including In The Dust Of This Planet (Zero Books, 2011) and Cosmic Pessimism (Univocal, 2015).

  • Alexander R. Galloway — Brometheanism

    Alexander R. Galloway — Brometheanism

    By Alexander R. Galloway
    ~

    In recent months I’ve remained quiet about the speculative turn, mostly because I’m reluctant to rekindle the “Internet war” that broke out a couple of years ago, largely on blogs but also in various published papers. And while I’ve taught accelerationism in my recent graduate seminars, I opted for radio silence when accelerationism first appeared on the scene through the Accelerationist Manifesto, followed later by the book Inventing the Future. Truth is, I have mixed feelings about accelerationism. Part of me wants to send “comradely greetings” to a team of well-meaning fellow Marxists and leave it at that. Lord knows the left needs to stick together. Likewise there’s little I can add that people like Steven Shaviro and McKenzie Wark haven’t already written, and articulated much better than I could. But at the same time a number of difficulties remain that are increasingly hard to overlook. To begin I might simply echo Wark’s original assessment of the Accelerationist Manifesto: two cheers for accelerationism, but only two!

    What’s good about accelerationism? And what’s bad? I love the ambition and scope. Certainly the accelerationists’ willingness to challenge leftist orthodoxies is refreshing. I also like how the accelerationists demand that we take technology and science seriously. And I agree that there are important tactical uses of accelerationist or otherwise hypertrophic interventions (Eugene Thacker and I have referred to them as exploits). Still, I see accelerationism essentially as a tactic mistaken for a strategy. At the same time, this kind of accelerationism is precisely what dot-com entrepreneurs want to see from the left. Further, and most important, accelerationism is paternalistic and thus suffers from the problems of elitism and, ultimately, reactionary politics.

    Let me explain. I’ll talk first about Srnicek and Williams’ 2015 book Inventing the Future, and then address one of the central themes fueling the accelerationist juggernaut, Prometheanism. Well written, easy to read, and exhaustively footnoted, Inventing the Future is ostensibly a follow-up to the Accelerationist Manifesto, although the themes of the two texts are different and the authors almost never mention accelerationism in the book. (Srnicek in particular is nothing if not shrewd and agile: present at the christening of #A, we also find him on the masthead of the speculative realist reader, and today nosing in on “platform studies.” Wherever he alights next will doubtless portend future significance.) The book is vaguely similar to Michael Hardt and Antonio Negri’s Declaration from 2012 in that it tries to assess the current condition of the left while also providing a set of specific steps to be taken for the future. And while the accelerationists have garnered significantly more attention of late, mostly because their work feels so fresh and new, Hardt and Negri’s is the better book (and interestingly Srnicek and Williams never cite them).

    Inventing the Future

    Inventing the Future has essentially two themes. The first consists in a series of denunciations of what they call “folk politics,” defined in terms of Occupy, the Zapatistas, Tiqqun, localism, and direct democracy, ostensibly in favor of a new “hegemony” of planetary social democracy (also known as Leninism). The second theme concerns an anti-work polemic focused on the universal basic income (UBI) and shortening the work week. Indeed, even as these two authors collaborate and mix their thoughts, there seem to be two books mixed together into one. This produces an interesting irony: while the first half of the book unabashedly denigrates anarchism in favor of Leninism, the second half of the book focuses on that very theme (anti-work) that has defined anarchist theory since the split in the First International, if not since time immemorial.

    What’s so wrong with “folk politics”? There are a few ways to answer this question. First, the accelerationists are clearly frustrated by the failures of the left, and rightly so: a left debilitated by “apathy, melancholy and defeat” (5). There’s a demographic explanation as well. This is the cri de coeur of a younger generation seeking to move beyond what are seen as the sclerotic failures of postmodern theory with all of its “culturalist” baggage (which too often is a codeword for punks, queers, women, and people of color — more on that in a moment).

    Folk politics includes “the fetishization of local spaces, immediate actions, transient gestures, and particularisms of all kinds” (3); it privileges the “small-scale, the authentic, the traditional and the natural” (10). The following virtues help fill out the definition:

    immediacy…tactics…inherently fleeting…the past…the voluntarist and spontaneous…the small…withdrawal or exit…the everyday…feeling…the particular…the ethical…the suffering of the particular and the authenticity of the local (10-11)

    Wow, that’s a lot of good stuff to get rid of. Still, they don’t quit there, targeting horizontalism of various kinds. Radical democracy is in the crosshairs too. Anti-representational politics is out as well. All the “from below” movements, from the undercommons to the black bloc, anything that smacks of “anarchism, council communism, libertarian communism and autonomism” (26) — it’s all under indictment. This unceasing polemic culminates in the book’s most potent sentence, if not also its most ridiculous, where the authors dismiss all of the following movements in one fell swoop:

    Occupy, Spain’s 15M, student occupations, left communist insurrectionists like Tiqqun and the Invisible Committee, most forms of horizontalism, the Zapatistas…localism…slow-food (11-12)

    That scoops up a lot of people. And the reader is left to quibble over whatever principle of decision might group all these disparate factions together. But the larger point is clear: for Srnicek and Williams folk politics emerged because of an outdated Left (i.e. the abject failures of social democracy and communism) (16-), and an outmaneuvered Left (i.e. the rampant successes of neoliberalism) (19-). Thus their goal is to update the left with a new ideology and to overhaul its infrastructure, allowing it to modernize and scale up to the level of the planet.

    In the second half of the book, particularly in chapters 5 and 6, Srnicek and Williams elaborate their vision for anti-work and post-work. This hinges on the concept of full automation, and they provocatively assert that “the tendencies towards automation and the replacement of human labor should be enthusiastically accelerated” (109). Yet the details are scant. What kind of tech are we talking about? We get some vague references at the outset to “Open-source designs, copyleft creativity, and 3D printing” (1), then again later to “data collection (radio-frequency identification, big data)” and so on (110). But one thing this book does not provide is an examination of the technology of modern capitalism. (Srnicek’s Platform Capitalism is an improvement thematically but not substantively: he provides an analysis of political economy, but no tech audit.) Thus Inventing the Future has a sort of Wizard of Oz problem at its core. It’s not clear what clever devices are behind the curtain; we’re just supposed to assume that they will be sufficiently communistical if we all believe hard enough.

    At the same time the authors come across as rather tone deaf on the question of labor, bemoaning above all “the misery of not being exploited,” as if exploitation were some grand prize awarded to the subaltern. Further, they fail to address adequately the two key challenges of automation, both of which have been widely discussed in political and economic theory: first, that automation eliminates jobs for people who very much want and need them, leading to surplus populations, unemployment, migration, and entrenched poverty; and second, that automation transforms the organic composition of labor through deskilling and proletarianization, the offshoring of menial labor, and the introduction of the technical and specialist labor required to design, build, operate, and repair those seemingly “automagical” machines. In other words, under automation some people work less, but everyone works differently. Automation reduces work for some, but changes (and in fact often increases) work for others. Marx’s analysis of machines in Capital is useful here, where he addresses all of these various tendencies, from the elimination of labor and the increase in labor, to the transformation of the organic composition of labor — the last point being the most significant. (And while machines might help lubricate and increase the productive forces — not a bad thing — it’s clear that machines are absolutely not revolutionary actors for Marx. Optimistic interpretations gleaned from the Grundrisse notwithstanding, Marx defines machines essentially as large batteries for value. I have yet to find any evidence that today’s machines are any different.)

    So the devil is in the details: what kind of technology are we talking about? But perhaps more importantly, if you get rid of the “folk,” aren’t you also getting rid of the people? Srnicek and Williams try to address this in chapter 8, although I’m more convinced by Hardt and Negri’s “multitude,” Harney and Moten’s “undercommons,” or even formulations like “the part of no part” or the “inoperative community” found scattered across a variety of other texts. By the end Srnicek and Williams out themselves as reticular pessimists: let’s not specify “the proper form of organization” (162), let’s just let it happen naturally in an “ecology of organizations” (163). The irony being that we’re back to square one, and these anti-folk evangelists are hippy ecologists after all. (The reference to function over form [169] appears as a weak afterthought to help rationalize their decision, but it re-introduces the problem of techno-fetishism, this time a fetishism of the function.)

    To summarize, accelerationism presents a rich spectrum of problems. The first stems from the notion that technology/automation will save us, replete with vague references to “the latest technological developments” unencumbered by any real details. Second is the question of capitalism itself. Despite the authors’ Marxist tendencies, it’s not at all clear that accelerationism is anti-capitalist. In fact accelerationism would be better described as a form of post-capitalism, what Žižek likes to mock as “capitalism with a friendly face.” What is post-capitalism exactly? More capitalism? A modified form of capitalism? For this reason it becomes difficult to untangle accelerationism from the most visionary dreams of the business elite. Isn’t this exactly what dot-com entrepreneurs are calling for? Isn’t the avant-garde of acceleration taking place right now in Silicon Valley? This leads to a third point: accelerationism is a tactic mistaken for a strategy. Certainly accelerationist or otherwise hypertrophic methods are useful in a provisional, local, which is to say tactical, way. But accelerationism is, in my view, naïve about how capitalism works at a strategic level. Capitalism wants nothing more than to accelerate. Adding to the acceleration will help capitalism, not hinder it. Capitalism is this accelerating force, from primitive accumulation on up to today. (Accelerationists don’t dispute this; they simply disagree on the moral status of capitalism.) Fourth and finally is the most important problem revealed by accelerationism, the problem of elitism and reactionary politics. Given unequal technological development, those who accelerate will necessarily do so on the backs of others who are forced to proletarianize. Thus accelerationists are faced with a kind of “internal colonialism” problem, meaning there must be a distinction made between those who accelerate and those who facilitate acceleration through their very bodies. We already know who suffers most under unequal technological acceleration, and it’s not young white male academics living in England. Thus their skepticism toward the “folk” is all too often a paternalistic skepticism toward the wants and needs of the generic population. Hence the need for accelerationists to talk glowingly about things like “engineering consent.” It’s hard to see where this actually leads. Or, more to the point, who leads: if not Leninists then who, technocrats? Philosopher kings?

    *

    Accelerationism gains much inspiration from the philosophy of Prometheanism. If accelerationism provides a theory of political economy, Prometheanism supplies a theory of the subject. Yet it’s not always clear what people mean by this term. In a recent lecture titled “Prometheanism and Rationalism” Peter Wolfendale defines Prometheanism in such general terms that it becomes a synonym for any number of things: history and historical change; being against fatalism and messianism; being against the aristocracy; being against Fukuyama; being for feminism; the UBI and post-capitalism; the Enlightenment and secularism; deductive logic; overcoming (perceived) natural limits; technology; “automation” (which as I’ve just indicated is the most problematic concept of them all). Even very modest and narrow definitions of Prometheanism — technology for humans to overcome natural limits — present their own problems and wind up largely deflating the sloganeering of it all. “Okay so both the hydrogen bomb and the contraceptive pill are equally Promethean? So then who adjudicates their potential uses?” And we’re left with Prometheanism as the latest YAM philosophy (Yet Another Morality).

    Still, Prometheanism has a particular vision for itself and it’s worth describing the high points. I can think of six specific qualities. (1) Prometheanism defines itself as posthuman or otherwise antihuman. (2) Prometheanism is an attempt to transcend the bounds of physical limitation. (3) Prometheanism promotes freedom, as in for instance the freedom to change the body through hormone therapy. (4) Prometheanism sees itself as politically progressive. (5) Prometheanism sees itself as being technologically savvy. (6) Prometheanism proposes to offer technical solutions to real problems.

    But is any of this true? Interestingly Bernard Stiegler provided an answer to some of these questions already in 1994, and it’s worth returning to his book from that year Technics and Time, 1: The Fault of Epimetheus to fill out a conversation that has, thus far, been mostly one-sided. Stiegler’s book is long and complicated, and touches on many different things including technology and the increased rationalization of life, by way of some of Stiegler’s key influences including Gilbert Simondon, André Leroi-Gourhan, and Bertrand Gille. Let me focus however on the second part of the book, where Stiegler examines the two brothers Epimetheus and Prometheus.

    A myth about powers and qualities, the fable of Epimetheus and Prometheus is recounted by the sophist Protagoras starting at line 320c in Plato’s dialogue of that name. In Stiegler’s retelling of the story, we begin with Epimetheus, who, via a “principle of compensation” governed by notions of difference and equilibrium, hands out powers and qualities to all the animals of the Earth. For instance extra speed might be endowed to the gazelle, but only by way of balanced compensation given to another animal, say a boost in strength bestowed upon the lion. Seemingly diligent in his duties, Epimetheus nevertheless tires before the job is complete, shirking his duties before arriving at humankind, who is left relatively naked without a special power or quality of its own. To compensate humankind, Prometheus absconds with “the gift of skill in the arts and fire” — “τὴν ἔντεχνον σοφίαν σὺν πυρί” — captured from Athena and Hephaestus, respectively, conferring these two gifts to humanity (Plato, “Protagoras,” 321d).

    In this way humans are defined first not via technical supplement but through an elemental fault — this is Stiegler’s lingering poststructuralism — the fault of Epimetheus. Epimetheus forgets about us, leaving us until the end, and hence “Humans only occur through their being forgotten; they only appear in disappearing” (188). But it’s more than that: a fault followed by a theft, and hence a twin fault. Humanity is the “fruit of a double fault–an act of forgetting [by Epimetheus], then of theft [by Prometheus]” (188). Humans are thus a forgotten afterthought, remedied afterward by a lucky forethought.

    “Afterthought” and “forethought” — Stiegler means these terms quite literally. Who is Epimetheus? And who is Prometheus? Greek names often have etymological if not allegorical significance, as is the case here. Both names share the root “-metheus,” cognate with manthánō [μανθάνω], which means learning, study, or cultivation of knowledge. Hence a mathitís [μαθητής] is a learner or a student. (And in fact in a very literal sense “mathematics” simply refers to the things that one learns, not to arithmetic or geometry per se.) The two brothers are thus both varieties of learners, both varieties of thinkers. The key is which variety. The key is the Epi– and the Pro-.

    “Epi carries the character of the accidentally and artificial factuality of something happening, arriving, a primordial ‘passibility,’” Stiegler explains. “Epimetheia means heritage. Heritage is always epimathesis. Epimetheia would also mean then tradition-originating in a fault that is always already there and that is nothing but technicity” (206-207). Hence Epimetheus means something like “learning on the basis of,” “thinking after,” or, more simply, “afterthought” or “hindsight.” This is why Epimetheus forgets, why he is at fault, why he acts foolishly, because these are all the things that generate hindsight.

    Prometheus on the other hand is “foresight” or “fore-thought.” If Epimetheus means “thinking and learning on the basis of,” Prometheus means something more like “thinking and learning in anticipation of.” In this way, Prometheus comes to stand in for cleverness (but also theft), ingenuity, and thus technics as a whole.

    But is that all? Is the lesson simply to restore Epimetheus to his position next to Prometheus? To remember the Epimethean omission along with the Promethean endowment? In fact the old Greek myth isn’t quite finished, and, after initially overlooking the ending, Stiegler eventually broaches the closing section on Hermes. For even after benefiting from its Promethean supplement, humanity remains incomplete. Specifically, the gods notice that Man has a tendency toward war and political strife. Thus Hermes is tasked to implant a kind of socio-political virtue, supplementing humanity with “the qualities of respect for others [αἰδώ] and a sense of justice [δίκη]” (Plato 322c). In other words, a second supplement is necessary, only this time a supplement not rooted in the identitarian logic of heterogeneous qualities. “Another tekhnē is required,” writes Stiegler, “a tekhnē that is no longer paradoxically…the privilege of specialists” (201). This point about specialists is key — all you Leninists take note — because on Zeus’s command Hermes delivers respect and justice generically and equally across all persons, not via the “principle of compensation” based on difference and equilibrium used previously by Epimetheus to divvy up the powers and qualities of the animals. Thus while some people may have a talent for the piano, and others might be gifted in some other way, justice and respect are bestowed equally to all.

    This is why politics is always a question of the “hermeneutic community,” that is, the ad hoc translation and interpretation of real political dynamics; it comes from Hermes (201). At the same time politics also means “the community of those who have no community” because there is no adjudication of heterogeneous qualities, no truth or law stipulated in advance, except for the very “conditions” of the political (those “hermeneutic conditions,” namely αἰδώ and δίκη, respect and justice).

    To summarize, the Promethean story has three moments, not one, and all three ought to be given full voice:

    1. Default of origin (being forgotten about by Epimetheus/Hindsight)
    2. Gaining technicity (fire and skills from Prometheus/Foresight)
    3. Revealing the generic (“respect for others and a sense of justice” from Hermes)

    This strikes me as a much better way to think about Prometheanism overall, better than the narrow definition of “using technology to overcome natural limits.” Recognizing all three moments, Prometheanism (if we can still call it that) entails not just technological advancement, but also insufficiency and failure, along with a political consciousness rooted in generic humanity.

    And now would be a good time to pass the baton over to the Xenofeminists, who make much better use of accelerationism than its original authors do. The Xenofeminist manifesto provides a more holistic picture of what might simply be called a “universalism from below” — yes, that very folk politics that Srnicek and Williams seek to suppress — doing justice not only to Prometheus, but to Epimetheus and Hermes as well:

    Xenofeminism understands that the viability of emancipatory abolitionist projects — the abolition of class, gender, and race — hinges on a profound reworking of the universal. The universal must be grasped as generic, which is to say, intersectional. Intersectionality is not the morcellation of collectives into a static fuzz of cross-referenced identities, but a political orientation that slices through every particular, refusing the crass pigeonholing of bodies. This is not a universal that can be imposed from above, but built from the bottom up — or, better, laterally, opening new lines of transit across an uneven landscape. This non-absolute, generic universality must guard against the facile tendency of conflation with bloated, unmarked particulars — namely Eurocentric universalism — whereby the male is mistaken for the sexless, the white for raceless, the cis for the real, and so on. Absent such a universal, the abolition of class will remain a bourgeois fantasy, the abolition of race will remain a tacit white-supremacism, and the abolition of gender will remain a thinly veiled misogyny, even — especially — when prosecuted by avowed feminists themselves. (The absurd and reckless spectacle of so many self-proclaimed ‘gender abolitionists’ campaign against trans women is proof enough of this). (0x0F)


    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Ben Parker — What Is A Theory of the Novel Good For?

    Ben Parker — What Is A Theory of the Novel Good For?

    by Ben Parker

    Review of Guido Mazzoni, Theory of the Novel, translated from the Italian (2011) by Zakiya Hanafi, Harvard University Press, 2017.

    Because the novel is the most important product of modernity, any theory of the novel is also a theory of modernity. That modernity has been characterized in a variety of ways: as an unremitting catastrophe of Being—Georg Lukács’s The Theory of the Novel or René Girard’s Deceit, Desire, and the Novel; as the vulnerable legacy of humanist secularism—Erich Auerbach’s Mimesis; or epistemologically—Michel Foucault’s reading of Don Quixote as a crisis of signification. As Guido Mazzoni tells the story in his Theory of the Novel, modernity has been a long process of liberation from the implicit transcendence of collective cultural projects. We have now arrived at a moment where “the particular life represents the only horizon of sacredness that modern culture still recognizes.” Modernity is therefore the disruptive entropy of “unbelonging,” the triumph of “individualistic, anarchic, dispersive, centrifugal” forces over those of “collective transcendence.” By Mazzoni’s scorekeeping, the signal accomplishments of modernity are human rights, democracy, and relativism, but above all, “the concrete capacity to construct small spheres of autonomy.” The novel therefore marks “the entrance of democracy into literature,” because it is the vehicle par excellence of particularized private experience. Mazzoni prizes the novel for “its ability to make us see the world through the eyes and conscience of someone else, its ability to allow us to step into a possible life that is not ours.”

    Given this endpoint of absolute relativism—“Each person is an epicenter of absolute meaning”—Mazzoni has to construct his history of the novel retrospectively, as a gradual disburdening of the possibility of transcendence and collective horizons. He casts this ontological flattening in the light of an inner liberation of the novel form, although it could as easily be felt as a suffocating reduction. Mazzoni describes the first two centuries (1550-1750) of the novel’s history as an emancipation from the conceptual scaffolding of allegory and moral didacticism, on one hand, and from the strict delineations of classicist poetics (tragedy depicts a higher type of character, and comedy a lower) on the other. Because he was trained as a philologist, Mazzoni plunges the reader into a slough of terminological distinctions attending the birth of the novel: le roman, der Roman, il romanzo, romanice loqui, romanz, romance, novella, nouvelle, novela, novel. But his theory of genre rests upon a dubious metaphysics: rather than timeless Platonic forms, genres are “universals in re,” knots of emerging practices bound up with contemporary definitions and prescriptions. Instead of defining “the novel” retrospectively, which would mean fitting works like Tristram Shandy and The Golden Ass into the same Procrustean bed, Mazzoni sees the genre as the outcome of a complex fusion of heterogeneous conventions and literary corpuses. His approach is to “reconstruct the dialectic between the object and the words that enabled the object to be defined in the first place.” The drawback to this method is that the definition is never immanent to the novels themselves, but is derived from the belletristic scaffolding that is Mazzoni’s preferred archive. The scholarship on display—Mazzoni seems to have read every treatise and preface from the period—is unimpeachably exhaustive, even overwhelming.
We learn that Don Quixote, for example, was not welcomed into the world as a novel but as a “comic romance.” But Mazzoni declines to pursue the question, what process of generic self-definition is Don Quixote itself engaged in? Nor does he see the retrospective genealogy of the novel as in large part an invention of the novel itself (as, for instance, the shelf of books in David Copperfield’s library). In any event, the upshot of this formative period is that the novel emerges as the “book of particular life,” a record of private persons, caught up in the “anarchy of the real,” rather than idealized or public figures made into abstract examples.

    Once the novel has broken free from allegory (whose political dimensions, overlooked by Mazzoni, have been detailed by Fredric Jameson), and we find ourselves in the nineteenth century, the next constraint to be discarded is melodrama. Melodrama gets painted as the bad outward form of psychology, which Mazzoni contrasts to the subtle analysis of interior life that culminates in James, Proust, and Woolf.  Thus melodrama turns out to be a convenient sorting mechanism for arriving at a set of all-too familiar preferences: Austen (but not Scott), Flaubert (but not Balzac), Eliot and Tolstoy (but not Dickens or Hugo). As with allegory, melodrama is classed as a transcendental and collective schema, averse to the finer gradations of “real life.” For melodrama, we are informed, belonged to a moment where “history had become a lived experience of the masses,” though “at a certain point this paradigm proved to be unrealistic.” It was no longer “plausible to think that people, subjects, or witnesses of an unprecedented transformation were involved in absolute conflicts.” What we have instead of large-scale history is the gradual extension of “our understanding of the interior life,” an ever-refined representational accuracy comparable to “the gains made in physics, astronomy, or anatomy.”

    By the time we reach the contemporary novel, the sphere of freedom that Mazzoni wants to find in the novel has been narrowed down to the horizon of sheer everydayness. We have exchanged the wild explorations of Robinson Crusoe, Gulliver, Edward Waverley, Natty Bumppo, and Huckleberry Finn for the boredom of Emma Bovary. All we are left with is the bad infinity of “real life” in its banal givenness. Freedom is surreptitiously redefined, from the kind of “unbelonging” of the earlier mode of “lighting out for the territory,” to the unbelonging of grousing individual discontent. No surprise that the contemporary authors Mazzoni endorses are Philip Roth, J.M. Coetzee (singling out Boyhood and Youth), Michel Houellebecq (The Elementary Particles), and Jonathan Littell. He doesn’t provide a reading of any of these novels (although he does cite a negative review of Littell’s The Kindly Ones). Karl Ove Knausgaard’s My Struggle is something like an empirical confirmation of Mazzoni’s thesis about the tendency of the novel towards absolutely private particularity, absent any transcendent justification. Mazzoni’s concluding observation—“Inside our small local worlds, everything at stake has an unquestionable value”—could just as easily have been written by Knausgaard as a summary of the exhausting strife of representability at the heart of his book.

    In outline, then, Mazzoni’s account recapitulates the problematic of Lukács’s Theory of the Novel—“the refusal of the immanence of being to enter into empirical life,” the pulverization of all transcendent projects—in order to render it unproblematic. What Lukács saw as “the dissonance special to the novel” was its capturing of the devastating ironies and grotesque realizations that the transcendent ideal is exposed to. For Mazzoni, however, such dissonance is simply “implausible,” a failure of perspective insufficiently immersed in the proliferating contingencies of “real life.” So, what for Lukács was the constitutive problematic of the novel—the hard-fought contest between the ideal and an inert (but ultimately victorious) reality—here turns out to be a detachable “extra” or a historical vestige. Mazzoni sees the struggle with the ideal as something that was gradually exorcised or shed during the novel’s development, as opposed to something essential to defining the genre. His argument then turns out to be another entry in the “end of grand narratives” narrative, or an instance of what Alain Badiou calls “democratic materialism”: we no longer believe in any Truths striving to be realized in the world, only in local particulars.
With oracular resignation, Mazzoni announces that, starting with some generalized metaphysical eclipse in the nineteenth century, “Universal forces were no longer revealed in the experience of private persons.” One imagines him lecturing the great characters of fiction like a stern guidance counselor, for their stubborn lack of realism, in those moments of Lukácsian “dissonance” where they confront a churning abyss of unbearable meaning underlying an ongoing and inessential life: Don Quixote for attempting to revive chivalry by mounting his gaunt nag and donning a pasteboard visor; or Catherine Earnshaw for proclaiming, “I am Heathcliff!”; or Captain Ahab for hurling himself against the whale as striking at some “inscrutable malice” behind a mask; or Marlow for detecting, in the depths of the Congo, “the stillness of an implacable force brooding over an inscrutable intention.”

    To be sure, Mazzoni’s claim that the novel has freed itself from the transcendental has the force of self-evidence, if one surveys contemporary fiction. Mazzoni’s reading of novels in English cuts off at 2002, but (in addition to Knausgaard) Chris Kraus, Sheila Heti, Ben Lerner, and Rachel Cusk would all be pertinent here, as instances of flattened, quotidian perception, where the “microcosm” of private existence—voided of melodrama or narrative artifice—is elevated to “absolute importance.” Going further back, one could add other instances. John Updike, Frederick Exley, and Renata Adler come immediately to mind. Mazzoni doesn’t mention Norman Mailer, who is on quite another track, but whose “nonfiction novel” would be additional confirmation of the novel’s tendency to represent a reality divested of transcendent impulses. (At this point, however, one wonders whether it is not fictionality itself that represents the final burden of transcendence, whether Mazzoni’s sense of “the novel” is not just headed towards the documentary status of journalism, memoir, travel writing, etc.)

    On the other hand, some of the most acclaimed novels of recent years have resuscitated either melodrama (Hanya Yanagihara’s A Little Life), or transcendental (religious) preoccupations (Marilynne Robinson’s work), or allegory (Yann Martel’s Life of Pi). To remark that these works are also somewhat middlebrow and embarrassing would introduce a dimension of aesthetic evaluation that Mazzoni never broaches. It’s worth noting, too, that Mazzoni’s own examples are not unproblematic. Although Houellebecq’s The Elementary Particles does duty for Mazzoni, his more recent The Possibility of an Island and Submission don’t fit the pulverization-of-collective-transcendence thesis at all. Houellebecq emerges, instead, as an (unevenly satirical) utopian thinker, closer to Jonathan Swift in the Houyhnhnms section of Gulliver’s Travels than to Roth’s Zuckerman novels. Mazzoni also cites the autobiographical novels of J.M. Coetzee, but his latest novels, The Childhood of Jesus and The Schooldays of Jesus, whatever else they may be, are obvious violations of Mazzoni’s rule against allegory.

    The unbearable scene he cites from Buddenbrooks, when little Hanno draws two lines under the last entry in the family tree, muttering, “I thought… I thought… there wouldn’t be anything more,” is indeed a powerful image of finitude. But Mann then went on to write the highly allegorical The Magic Mountain and Doctor Faustus. Dostoevsky is invoked in a number of contradictory ways—he is, on one hand, one of the first authors who is “still contemporary,” because of his techniques of characterization, but on the other hand, he presents a regrettable and lingering case of melodrama. What is never mentioned is that Dostoevsky’s oeuvre, from start to finish, is shot through with transcendental preoccupations. To take only the case of The Brothers Karamazov, what does one make of the beautiful moment in the final chapter, where the father of the slain child Ilyusha sees a flower fall on the snow, and rushes “to pick it up as though everything in the world depended on the loss of that flower”? This sense of absolute responsibility, of “everything in the world” depending on one’s posture towards salvation and loss, is the hard core of Dostoevsky’s meaning. If Mazzoni wants to insist that “we cannot go beyond” our immersion in factical being, that it is “the sole layer of existence that… distinguishes us from nothing,” then he will have to lose The Brothers Karamazov as a forward-looking work.

    I wrote above that the novel is the most important product of modernity. I forgot to add that modernity is in large part the product of the novel. The novel is one of the “workshops where ideals are manufactured,” to take an image from Nietzsche’s Genealogy of Morals. For instance, the continuous and rigorous thinking of responsibility throughout the novels of the Victorian period (paradigmatically, Great Expectations, Tess of the D’Urbervilles, and Lord Jim) constitutes as central a development of our ethical life as the subsequent Freudian theorization of same. The self-representation of the nineteenth-century social imaginary is largely created through the ways novels develop of “giving an account of oneself,” in Judith Butler’s phrase. The ultimate trouble with Theory of the Novel is that Mazzoni oscillates between seeing the novel as a co-creator of modernity, whereby “an essential aspect of the Western form of life takes shape and becomes an object of knowledge only through mimesis and fiction,” and seeing the novel (or cultural production as a whole) as validating (or falling into line with) larger systemic results, e.g. “the disintegrative force implicit in modern individualism,” or “the relativistic deflation of collective values.” We don’t know, finally, whether the Western “crisis of transcendence”—what for Lukács was an ongoing schism constitutive of the novel form—is simply a fait accompli restricting literary possibility, or whether one might hold the history of the novel itself accountable for this disintegration. Nor does Mazzoni see the novel as a possible reflection upon these outcomes, a perspective-taking that would refuse the enforcement of deflationary relativism.

    But might not the greatest novels be precisely such refusals? To return again to The Brothers Karamazov, we find there (in the remembrances of Father Zosima) a forestalling of Mazzoni’s conclusions, in almost identical language: “For all men in our age are separated into units, each seeks seclusion in his own hole, each withdraws from the others, hides himself, and hides what he has, and ends by pushing people away from himself… He is accustomed to relying only on himself, he has separated his unit from the whole, he has accustomed his soul to not believing in people’s help, in people or in mankind.” For Dostoevsky, at least, the novel is not a story of emancipation from transcendence. If the novel has nevertheless brought about this anomie and purgation of values, the novel goes on only in a perpetual fight against what it hath wrought.

    Ben Parker is assistant professor of English at Brown University. His current research is on recognition scenes in the nineteenth-century novel. He tweets @exyoungperson.

  • Zachary Loeb — Who Moderates the Moderators? On the Facebook Files

    Zachary Loeb — Who Moderates the Moderators? On the Facebook Files

    by Zachary Loeb

    ~

    Speculative fiction is littered with fantastical tales warning of the dangers that arise when things get, to put it amusingly, too big. A researcher loses control of their experiment! A giant lizard menaces a city! Massive computer networks decide to wipe out humanity! A horrifying blob metastasizes as it incorporates all that it touches into its gelatinous self!

    Such stories generally contain at least a faint hint of the absurd. Nevertheless, silly stories can still contain important lessons, and some of the morals that one can pull from such tales are: that big things keep getting bigger, that big things can be very dangerous, and that sometimes things that get very big very fast wind up doing a fair amount of damage as what appears to be controlled growth is revealed to actually be far from managed. It may not necessarily always be a case of too big, as in size, but speculative fiction features no shortage of tragic characters who accidentally unleash some form of horror upon an unsuspecting populace because things were too big for that sorry individual to control. The mad scientist has a sad corollary in the figure of the earnest scientist who wails “I meant well” while watching their creation slip free from their grasp.

    Granted, if you want to see such a tale of the dangers of things getting too big, and of the desperate attempts to maintain some sort of control, you don’t need to go looking for speculative fiction.

    You can just look at Facebook.

    With its publication of The Facebook Files, The Guardian has pried back the smiling façade of Zuckerberg’s monster to reveal a creature that an overwhelmed staff is desperately trying to contain with less than clear insight into how best to keep things under control. Parsing through a host of presentations and guidelines that are given to Facebook’s mysterious legion of content moderators, The Facebook Files provides insight into how the company determines what is and is not permitted on the website. It’s a tale that is littered with details about the desperate attempt to screen things that are being uploaded at a furious rate, with moderators often only having a matter of seconds in which they can make a decision as to whether or not something is permitted. It is a set of leaks that are definitely worth considering, as they provide an exposé of the guidelines Facebook moderators use when considering whether things truly qualify as revenge porn, child abuse, animal abuse, self-harm, unacceptable violence, and more. At the very least, the Facebook Files are yet another reminder of the continuing validity of Erich Fromm’s wise observation:

    What we use is not ours simply because we use it. (Fromm 2001, 225)

    In considering the Facebook Files it is worthwhile to recognize that the moderators are special figures in this story – they are not really the villains. The people working as actual Facebook moderators are likely not the same people who truly developed these guidelines. In truth, they likely weren’t even consulted. Furthermore, the moderators are almost certainly not the high-profile Facebook executives espousing techno-utopian ideologies in front of packed auditoriums. To put it plainly, Mark Zuckerberg is not checking to see if the thousands of photos being uploaded every second fit within the guidelines. In other words, having a measure of sympathy for the Facebook moderators who spend their days judging a mountain of (often disturbing) content is not the same thing as having any sympathy for Facebook (the company) or for its figureheads. Furthermore, Facebook has already automated a fair amount of the moderating process, and it is more than likely that Facebook would love to be able to ditch all of its human moderators in favor of an algorithm. Given the rate at which it expects them to work it seems that Facebook already thinks of its moderators as being little more than cogs in its vast apparatus.

    That last part helps point to one of the reasons why the Facebook Files are so interesting – because they provide a very revealing glimpse of the type of morality that a machine might be programmed to follow. The Facebook Files – indeed the very idea of Facebook moderators – is a massive hammer that smashes to bits the idea that technological systems are somehow neutral, for it puts into clear relief the ways in which people are involved in shaping the moral guidelines to which the technological system adheres. The case of what is and is not allowed on Facebook is a story playing out in real time of a company (staffed by real live humans) trying to structure the morality of a technological space. Even once all of this moderating is turned over to an algorithm, these Files will serve as a reminder that the system is acting in accordance with a set of values and views that were programmed into it by people. And this whole tale of Facebook’s attempts to moderate sensitive/disturbing content points to the fact that morality can often be quite tricky. And the truth of the matter, as many a trained ethicist will attest, is that moral matters are often rather complex – which is a challenge for Facebook as algorithms tend to do better with “yes” and “no” than they do with matters that devolve into a lot of complex philosophical argumentation.

    Thus, while a blanket “zero nudity” policy might be crude, prudish, and simplistic – it still represents a fairly easy way to separate allowed content from forbidden content. Similarly, a “zero violence” policy runs the risk of hiding the consequences of violence, masking the gruesome realities of war, and covering up a lot of important history – but it makes it easy to say “no videos of killings or self-harm are allowed at all.” Likewise, a strong “absolutely no threats of any sort” policy would make it so that “someone shoot [specific political figure]” and “let’s beat up people with fedoras” would both be banned. By trying to parse these things Facebook has placed its moderators in tricky territory – and the guidelines it provides them with are not necessarily the clearest. Had Facebook maintained a strict “black and white” version of what’s permitted and not permitted it could have avoided the swamp through which it is now trudging with mixed results. Again, it is fair to have some measure of sympathy for the moderators here – they did not set the rules, but they will certainly be blamed, shamed, and likely fired for any failures to adhere to the letter of Facebook’s confusing law.

    Part of the problem that Facebook has to contend with is clearly the matter of free speech. There are certainly some who will cry foul at any attempt by Facebook to moderate content – crying out that such things are censorship – while still others will scoff at the idea of free speech as applied to Facebook, seeing as it is a corporate platform and therefore all speech that takes place on the site already exists in a controlled space. A person may live in a country where they have a government-protected right to free speech – but Facebook has no such obligation to its users. There is nothing preventing Facebook from radically changing its policies about what is permissible. If Facebook decided tomorrow that no content related to, for example, cookies was to be permitted, it could make and enforce that decision. And the company could make that decision regarding things much less absurd than cookies – if Facebook wanted to ban any content related to a given protest movement it would be within its rights to do so (which is not to say that would be good, but to say that it would be possible). In short, if you use Facebook you use it in accordance with its rules; the company does not particularly care what you think. And if you run afoul of one of its moderators you may well find your account suspended – you can cry “free speech” but Facebook will retort with “you agreed to our terms of use, Facebook is a private online space.” Here, though, a person may try to fire back at Facebook that in the 21st century, to a large extent, social media platforms like Facebook have become a sort of new public square.

    And, yet again, that is part of the reason why this is all so tricky.

    Facebook clearly wants to be the new “public square” – it wants to be the space where people debate politics, where candidates have forums, and where activists organize. Yet it wants all of these “public” affairs to take place within its own enclosed “private” space. There is no real democratic control of Facebook, the company may try to train its moderators to respect various local norms but the people from those localities don’t get to have a voice in determining what is and isn’t acceptable. Facebook is trying desperately to have it all ways – it wants to be the key space of the public sphere while simultaneously pushing back against any attempts to regulate it or subject it to increased public oversight. As lackluster and problematic as the guidelines revealed by the Facebook Files are, they still demonstrate that Facebook is trying (with mixed results) to regulate itself so that it can avoid being subject to further regulation. Thus, free speech is both a sword and a shield for Facebook – it allows the company to hide from the accusations that the site is filled with misogyny and xenophobia behind the shield of “free speech” even as the site can pull out its massive terms of service agreement (updated frequently) to slash users with the blade that on the social network there is no free speech only Facebook speech. The speech that Facebook is most concerned with is its own, and it will say and do what it needs to say and do, to protect itself from constraints.

    Yet, to bring it back to the points with which this piece began, many of the issues that the Facebook Files reveal have a lot to do with scale. Sorting out the nuance of an image or a video can take longer than the paltry few seconds most moderators are able to allot to each image/video. And it further seems that some of the judgments that Facebook is asking its moderators to make have less to do with morality or policies than they have to do with huge questions regarding how the moderator can possibly know if something is in accordance with the policies or not. How does a moderator not based in a community really know if something is up to a community’s standard? Facebook is hardly some niche site with a small user base and devoted cadre of moderators committed to keeping the peace – its moderators are overworked members of the cybertariat (a term borrowed from Ursula Huws); the community they serve is Facebook, not the places whence the users hail. Furthermore, some of the more permissive policies – such as allowing images of animal abuse – couched under the premise that they may help to alert the authorities, seem like more of an excuse than an admission of responsibility. Facebook has grown quite large, and it continues to grow. What it is experiencing is not so much a case of “growing pains” as it is a case of the pains that are inflicted on a society when something is allowed to grow out of control. Every week it seems that Facebook becomes more and more of a monopoly – but there seems to be little chance that it will be broken up (and it is unclear what that would mean or look like).

    Facebook is the researcher’s science project that is always about to get too big and slip out of control, and the Facebook Files reveal the company’s frantic attempt to keep the beast from throwing off its shackles altogether. And the danger there, from Facebook’s stance, is that – as in all works where something gets too big and gets out of control – the point when it loses control is the point where governments step in to try to restore order. What that would look like in this case is quite unclear, and while the point is not to romanticize regulation, the Facebook Files help raise the question of who is currently doing the regulating and how they are doing it. That Facebook is having such a hard time moderating content on the site is actually a pretty convincing argument that when a site gets too big, the task of carefully moderating things becomes nearly impossible.

    To deny that Facebook has significant power and influence is to deny reality. While it’s true that Facebook can only set the policy for the fiefdoms it controls, it is worth recognizing that many people spend a heck of a lot of time ensconced within those fiefdoms. The Facebook Files are not exactly a shocking revelation showing a company that desperately needs some serious societal oversight – rather what is shocking about them is that they reveal that Facebook has been allowed to become so big and so powerful without any serious societal oversight. The Guardian’s article leading into the Facebook Files quotes Monika Bickert, ‎Facebook’s head of global policy management, as saying that Facebook is:

    “not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used.”

    But a question lingers as to whether or not these policies are really reflective of responsibility in any meaningful sense. Facebook may not be a “traditional” company in many respects, but one area in which it remains quite hitched to tradition is in holding to a value system where what matters most is the preservation of the corporate brand. To put it slightly differently, there are few things more “traditional” than the monopolistic vision of total technological control reified in Facebook’s every move. In his classic work on the politics of technology, The Whale and the Reactor, Langdon Winner emphasized the need to seriously consider the type of world that technological systems were helping to construct. As he put it:

    We should try to imagine and seek to build technical regimes compatible with freedom, social justice, and other key political ends…If it is clear that the social contract implicitly created by implementing a particular generic variety of technology is incompatible with the kind of society we deliberately choose—that is, if we are confronted with an inherently political technology of an unfriendly sort—then that kind of device or system ought to be excluded from society altogether. (Winner 1989, 55)

    The Facebook Files reveal the type of world that Facebook is working tirelessly to build. It is a world where Facebook is even larger and even more powerful – a world in which Facebook sets the rules and regulations. In which Facebook says “trust us” and people are expected to obediently go along.

    Yes, Facebook needs content moderators, but it also seems that it is long-past due for there to be people who moderate Facebook. And those people should not be cogs in the Facebook machine.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Fromm, Erich. 2001. The Fear of Freedom. London: Routledge Classics.
    • Winner, Langdon. 1989. The Whale and the Reactor. Chicago: The University of Chicago Press.
  • Sarah Brouillette — Couple Up: Review of “Family Values: Between Neoliberalism and the New Social Conservatism”

    Sarah Brouillette — Couple Up: Review of “Family Values: Between Neoliberalism and the New Social Conservatism”

    by Sarah Brouillette

    Review of Melinda Cooper, Family Values: Between Neoliberalism and the New Social Conservatism (New York: Zone Books, 2017)

    The basics of neoliberalism are by now well known. Pressured to be wary of public deficit spending, and trying to find ways to rejuvenate depressed economies, neoliberal governments cut spending on welfare and other social services, and turn the programs that do remain into job training “workfare.” Policies at the same time shift to give priority to the needs of businesses wanting to keep wages low, to offshore production, and to make few or no commitments to workers. The power of unions is undercut as a result, so it is decreasingly possible to look to that form of collectivity as a shelter.[1] Politicians, advisors, sympathetic management consultants and business professors meanwhile emphasize private initiative and personal merit as the keys to success. As a result, work has been trending toward the less regular, less routine, less secure, less protected by union membership, with wages stagnant and less likely to be supplemented by things like affordable public education, low rents, tax credits, and childcare benefit payments.

    The working individual suited to this environment will naturally possess certain traits, as people are encouraged to look to themselves for more and more of what they need. Everything becomes a matter of personal responsibility: invest smartly for the future, take out a loan to pay for college, be your own brand, find your joy, “live your life.” If there is a culture of neoliberalism, it is all about interiority and the individual psychic life: therapeutic culture, because there is little state funding for mental health treatment. Find out who you really are, do what you love, look within, take your natural resilience as the base of every struggle and its overcoming; experience setbacks, Pop Idol style, as welcome occasions to overcome every hurdle. Self-improve. Self-actualize.

    The causal relations are sometimes murky and eminently debatable. Don’t governments in fact fund wellness initiatives, especially targeting underprivileged communities? And what about all the counternarratives emphasizing the necessity of communities coming together – the British Tories’ “Big Society,” for instance? But the general account of neoliberalism is quite uniform. It pinpoints the force of biographization, responsibilization, individualization, self-management, a DIY ethos, and customization of personal preference as the lifeblood of the neoliberal order.[2]

    Against all this, Melinda Cooper’s Family Values: Between Neoliberalism and the New Social Conservatism argues that the key social unit of neoliberalism is not the individual but the family, and not just any family but the family in perpetual crisis. She presents the postwar Fordist family wage – basically, a state-backed wage high enough to support a family with only one parent working – as a “mechanism for the normalization of gender and sexual relationships” (8), and accordingly sees no reason to lament its demise. As an “instrument of redistribution,” she writes, it “policed the boundaries between women and men’s work and white and black men’s labor” and was “inseparable from the imperative of sexual normativity” (23). “Few African American men enjoyed the family wage privileges of the unionized industrial labor force,” and their disproportionately high unemployment is evidence of the “multiple exclusions serving to define the boundaries of state-subsidized reproduction” (35-6).

    Just as the “Fordist politics of class … established white, married masculinity as point of access to full social protection” (23), the fundamental concern for neoliberals like Gary Becker was how to respond to the breakdown of this masculinity and the family built around it. “Neoliberals are particularly concerned about the enormous social costs that derive from the breakdown of the stable Fordist family,” Cooper argues. They aim “to reestablish the private family as the primary source of economic security and the comprehensive alternative to the welfare state” (9). Basically, they want the traditional family intact as a compensation for precarity.

    The data show that in the neoliberal era private family wealth is increasingly decisive in “shaping and restricting social mobility” (125), and this is a result of concerted policymaking. In the 1960s, inflation eroded the wealth at the top tiers, as it translated into the deflation of financial assets. Inflation was at the time understood in precisely this way, as a redistributive tax, “intensifying progressive tendencies” of the period: “Free-market economists insinuated that inflation was a form of state-sanctioned fraud – a covert tax designed to extort wealth from investors and transfer it to the lower classes” (127). The neoliberal “paradigm shift in American fiscal and monetary policy” set about ending this redistributive movement. If the Employment Act of 1946 wanted to “promote maximum employment, production and purchasing power,” where wage and price inflation were understood as signs of growth and as “benign trade-offs to full employment,” the neoliberals overturned all this.

    Figures such as Milton Friedman and Paul Volcker “turn[ed] inflation-targeting into the prime objective of monetary policy,” thus restricting the money supply and pushing up interest rates. Whereas bondholders in the 1970s saw assets depreciate and the Federal Reserve “deferred to the interests of unionized labor and welfare constituencies,” in the new era the Fed would strive “to repress wages and consumer prices in the service of asset price appreciation.” These policies led to a sharp turnaround in the distribution of national income; the “share of national income flowing to financial investors went from negative or stagnant in the 1970s to ‘substantially positive’ in the 1980s”; while “labor’s share of national income declined proportionately” (134). By 1983, Cooper writes, “wealth concentration had reverted to its 1962 level and by the end of the decade had plummeted to levels comparable to 1929” (135).

    There has thus been, at the top tier, a massive “resurgence of large family fortunes” (137). Nearly everywhere else, though, with stagnant wages, unemployment, and the transfer of the costs of things like higher education and health care back to families, lack of access to familial wealth can condemn one to a lifetime of debt. Hence Cooper’s argument about the importance of the family: intergenerational familial support in the form of housing, or money, or willingness to be signatories to loans, is a neoliberal necessity for many, and the pressure to combine dependence on parents with married coupledom just compounds the effect.[3] According to statistics gathered by the Pew Research Center, 1960 was the year in which people under 25 were most likely to live independently. In more recent decades, however, young people have been exhorted to invest in the future, save for retirement, and acquire assets (houses and university degrees). At the same time, and often in relation to this, they have been forced into debt and into insecure employment. No wonder they are more inclined to live with parents or partners. Of course, there is such a thing as a non-normative family, and perhaps living independently from relatives is not something we should unduly idealize. Cooper’s interest, though, is in what sort of family arrangements government programs prefer, and how preferences shift given combined pressure from neoliberal economic policy and the new social conservatism. We will return to her idealization of independence, however.

    The more common argument, of course, is that neoliberalism is destructive to family life, as it encourages workers to be “low drag,” moveable, flexible, always working, losing any sense of a private life outside of work, and also alone in leisure in front of a personally selected entertainment service displayed on a privately watched device. Yet, as Annie McClanahan has recently argued, not many people are really these footloose mobile workers.[4] For most employers, it is probably more important that those they hire be replaceable than that they be mobile. Only workers in relatively elite sectors (high tech, higher education, entertainment) are in a better position if they can move from thing to thing without worrying about family obligations.[5]

    This is not to deny that there is now also a more general animus against the restrictions and burdens of family life – the boredom of marriage, and drudgery of raising children (all captured so well by a show like Mad Men, for instance, which crystallizes the individualizing ethos so perfectly). However, there is just as much pressure to maintain the bonds of coupledom, and this tension between rejection and embrace may in fact be the point worth emphasizing. It seems that people are increasingly wondering if marriage is “worth it,” while decreasingly being able to exit it, and this is a cause of general anxiety, finding outlet in things like the dating site for adulterers, Ashley Madison, which was notorious for a minute in 2015 after its user data was stolen. When it turned out that most of the male customers were at least some of the time corresponding with bots rather than real women, I couldn’t help thinking that in a way it didn’t matter: the point is that users find an outlet for their sense of being stuck in a social relation (marriage!) on which they are dependent. Indeed, the bot’s lack of reality, lack of availability, is what makes the “affair” appealingly nonthreatening to the user’s IRL relationships. Moralistic attacks on these men – the fact that some of those caught are family-values conservatives is, to be sure, a rich irony – miss the point: they are not having affairs; they are staying in unhappy marriages that they depend on in various ways.

    They depend on marriage because it is still the normative standard for people (if you aren’t married there is something wrong with you; if you don’t have kids you are deviant in some way). They depend on it in that they can’t afford a house without two salaries, because for tax purposes it is better to be a legally recognized couple, because the lifestyle they aspire to requires it, because caring for children alone is very hard, because shifting work hours and temporary contracts make the second salary a necessity, even if it too is precarious. They depend on it because they are too tired and generally physically weary to try to have any other sort of relationship. Being non-normative can feel like SO. MUCH. WORK. A film like 2009’s Up in the Air makes the point very well: the protagonist is the epitome of the roving high-powered executive entrepreneur (indeed, his job is to fire people), but his story is not a celebration of the escape from normativity. It is rather a lament about the psychic misery of solitude. The message is clear: couple up!  

    How did the family start to lose its normative power? For Cooper, conservatives skewering feminism, and more leftist thinkers trying to understand the foundations of neoliberalism, are in agreement about the force of 1960s and 1970s countercultural and antinormative critiques of the family. In Wolfgang Streeck’s analysis, the revolution in family law and intimate relationships – for example, the availability of no-fault divorce – destroyed the Fordist family wage because women were not stuck in the kitchen dependent on men any more. The family became a more flexible form because, in Cooper’s paraphrase of Streeck, feminists sought “an independent wage on a par with men,” eventually “transforming marriage from a long-term, noncontractual obligation into a contract that could be dissolved at will” (11). Cooper reads Eve Chiapello and Luc Boltanski’s argument as similar, in that they show how “the artistic left prepared the groundwork for the neoliberal assault on economic and social security by destroying its intimate foundations in the postwar family” (12). She quotes Nancy Fraser, also, who has written that “critique of the family wage … now supplies a good part of the romance that invests flexible capitalism with a higher meaning and moral point” (12). In each case, the idea is that feminism is somehow to blame for neoliberalization, because in seeking to free women from certain kinds of normative obligation and dependency they have demonized dependency in general, fetishizing independence from supports of any kind. Against these analyses, Cooper asks: what breakdown of the family, anyway? The apparent post-normativity of contemporary life is entirely compatible with the establishment of new norms. We continue to be form-determined after we no longer see social forms’ normative force. Put simply: the traditional family, which for Cooper is a family coerced into existence by exigency and normativity, is not broken enough.

    An economy in depression no longer affords the state-supported Fordist wage, but the family is re-inscribed and reformulated even as it is queried and undermined by antinormative movements. If the foundations of neoliberal policy are thoroughly economic, neoconservatism enters Cooper’s account as a largely compatible reaction formation. The neoconservative agenda, formed deliberately against the liberation movements of the 1960s and their challenge to the normativity of the traditional family, served neoliberalization far more than the countercultural left’s challenges to social convention. Cooper argues that, whereas nostalgia for the Fordist wage became a “hallmark of the left,” neoconservatives, allied with thrifty neoliberals, preferred “the strategic reinvention of a much older, poor-law tradition of private family responsibility.” In a policy formation that reflected both neoliberal and neoconservative thought, social welfare was not to disappear, but instead to be made into “an immense federal apparatus for policing the private family responsibilities of the poor” (21).

    As a public assistance program targeted at the noncontributing poor – workers paying into funds that would support them in the event of unemployment were always more palatable (34) – the fate of AFDC (Aid to Families with Dependent Children) is one of Cooper’s main cases. It allows her to show how social welfare extended to the poor – especially to single women, especially mothers, especially black mothers – became “associated with a general crisis of the American family” (29). As the composition of the program changed, with the number of African American women signing up outpacing that of white women, and divorced or never-married women joining the rolls, fears were heightened. Because “racial and sexual normativities were truly foundational to the social order of American Fordism, determining just who would be included and who would be excluded from the redistributive benefits of the social wage” (36), the inclusivity evident in the AFDC’s 1960s provision for non-married mothers proved to be short-lived. Arguments for reinstating the stability offered by the traditional family had significant influence at this juncture.

    Nor were these arguments solely made by conservatives. In the 1960s there was in fact significant leftist promotion of the African American male-breadwinner family and a related impetus against the “non-normative lifestyles of unattached African American women” (37); hence the tendency to identify the AFDC as a cause of family breakdown while promoting the “male breadwinner’s wage” (41). An article by Richard A. Cloward and Frances Fox Piven, published in The Nation in 1966 and presented as “a strategy to end poverty,” laments that the state was “substituting check-writing machines for male wage earners,” thereby “robb[ing] men of manhood, women of husbands, and children of fathers.” The authors continue: “To create a stable monogamous family, we need to provide men (especially Negro men) with the opportunity to be men, and that involves enabling them to perform occupationally” (qtd. 42).

    What they saw were “perverse disincentives to family formation built into the AFDC program” (43), the assumption being that women left more to their own devices would naturally be more likely to find men to support them. With the 1970s economic downturn, and anxieties directed at inflation in particular, the program became a touchstone for neoconservatives formulating their “new political philosophy of non-redistributive family values” (47). While neoliberals “called for an ongoing reduction in budget allocations dedicated to welfare—intent on undercutting any possibility that the social wage might compete with the free-market wage,” neoconservatives advocated an expanded role for the state in regulating sexuality. On both fronts, the point was the urgent necessity of “reinstating the family as the foundation of social and economic order” (49).

    Cooper discusses Milton Friedman’s concern that the “natural obligations” that “once compelled children to look after their parents in old age” have given way to “an impersonal system of social insurance whose long-term effect is to usurp the place of the family” (58). Friedman wrote that whereas once “Children helped their parents out of love or duty,” they now “contribute to the support of someone else’s parents out of compulsion and fear” (qtd. 58). State-based redistribution was a poor substitute for proper familial support and wealth transmission. For Gary Becker, also, the postwar welfare state destroys the “natural altruism of the family” (60). Becker’s theory of human capital is perhaps the premier theorization of individual self-management and self-appreciation. Michel Foucault treated Becker’s work as exemplary of the way that neoliberal analyses entail “replacement every time of homo economicus as partner of exchange with a homo economicus as entrepreneur of himself, being for himself his own capital … a capital that we will call human capital inasmuch as the ability-machine of which it is the income cannot be separated from the human individual who is its bearer.”[6] Becker also featured recently in a Merriam-Webster tweet of the term “human capital” – “turning people into statistics since 1799,” the tweet quipped – which linked to the full dictionary entry, where Becker’s work is cited as “taking a holistic view of a person’s life and experiences as they can be applied within the workforce.” Becker took personal investment in one’s own human capital appreciation as preferable to state investment (the benefits of high human capital only accruing to oneself, after all), and thus supported rising tuition costs and the student loan industry as a major part of the growing importance of private credit. 
Yet Cooper shows that his arguments also preferred a supportive wealth-generating family: the older generations would back student loans where necessary, as they naturally want children and grandchildren to bear human capital that self-appreciates at a greater pace and with results that are more lucrative. Becker celebrated Ronald Reagan for restoring kinship bonds.

    Reagan drastically cut the AFDC, before Bill Clinton eliminated it. It was replaced with the TANF program (Temporary Assistance for Needy Families), whose availability was contingent on states’ willingness to track down and enforce paternity obligations. TANF’s defenders claimed it was better for a woman and her children to be reliant on alimony and child support than to turn to the government for assistance (67). Here we get to the heart of Cooper’s refutation of the idea that neoliberalism privileges the footloose free agent. In fact, in her account, neoliberalism is more likely to pressure people to sustain unhealthy and unsustainable family and intimate relationships, including tying children to fathers who do not know them or want them. Clinton’s extensive welfare reform reflected and codified what she calls “a new bipartisan consensus on the social value of monogamous, legally validated relationships.” His government reformed welfare spending while devising “initiatives to promote the moral obligations of family, including a special budget allocation to finance marriage promotion programs and … bonus funds to states that could demonstrate that they had successfully reduced illegitimate births without increasing the abortion rate” (68). Barack Obama’s “healthy marriage and responsible fatherhood” initiatives continued in this direction.

    Cooper suggestively connects these initiatives to the “first experiment in federal relief ever implemented by Congress”: the 1865 creation of the Freedmen’s Bureau following the Emancipation Proclamation of 1863. Before 1863, slaves were precluded from legally sanctioned marriage. The Freedmen’s Bureau instructed that freedom to participate in the labor market came with “the right to marry and the responsibility to support wife and child” (79). Its support for freed slaves entailed a vigorous campaign to promote marriage, with Bureau agents authorized to perform marriages and a “sustained pedagogy of domestic life, schooling men in the notion that they were to become the breadwinners of the family and women in a new kind of economic dependence” (80). There were penalties for people cohabiting without marriage, and Bureau-assigned wage scales that penalized women, precisely because of the “social costs of dependency” that fell upon the state if forced to support unmarried women and their children (81).

    Like the more recent insistence that women secure alimony and child support before turning to welfare, these policies empowered men to assert rights over women and children. Indeed, the assumptions upon which they were based were not fundamentally threatened until the 1960s liberalization of family law, which made divorce easier and eased the stigmatizing of non-marital unions and cohabitation. “For an all too brief moment,” she argues, “revised AFDC rules allow divorced or never-married women and their children to live independently of a man while receiving a state-guaranteed income free of moral conditions” (97). That moment is over, however. “The modern child support system serves to demonstrate that the state is willing to enforce—indeed create—legal relationships of familial obligation and dependence where none have been established by mutual consent,” Cooper writes (105).

    We should pause here now on the figure of the never-married woman living independently thanks to welfare. Cooper argues that, in a context of relatively healthy public welfare spending, and of the pressures put on states by countercultural and antinormative activisms, there was a time when social welfare was “making women independent of individual men and freeing them from the obligations of the private family” (97). Hence, the fuel for the neoconservative backlash that soon followed – a backlash that gained traction because of the failing economy to which neoliberals were also turning their attention. A perfect storm. Yet Cooper’s celebration of the period in which social welfare possibly freed women from the constraints of marriage has her falling back into the trap she dismantles elsewhere: nostalgia for state provision.

    The image of the single woman with children, living with a state-based income “free of moral conditions,” reads as an idealization. Certainly, supporting children as a single parent on welfare has never been a cakewalk; and are we meant to conclude that “freeing” men from the burdens of paternity is an unalloyed boon to women? She needs this figure, though. Cooper’s idealization of the state-supported single mother alerts us to the fact that her ultimate objection is not to social welfare but rather to the restriction of its benefits to the Fordist white male breadwinner, and to the way welfare programs get tied to normative policies and programs emphasizing the preferability of turning to family, especially marriage, to marshal the necessary resources to get by.[7] She avoids the stronger critique of social welfare, which might emphasize the global accumulative regime and resource extraction on which US prosperity was built, the way nation-based welfare disperses the benefits of prosperity to some and not others, and the welfare state’s various regulatory and pacifying functions.[8]

    Does neoliberalism feel different to some people simply because it follows on the moment of postwar prosperity and the relatively expansive Keynesian social welfare that flowed from it, in which there was palpable faith in the civic virtue attending government spending on social programs? Neoliberal policies have threatened protections and comforts that these programs offered to some people – people like American and British university professors, who produce the analyses of the unique wrongs of the neoliberal order. Is all the worry about neoliberalism just a symptom of the decline of the hegemony of liberal democracy?

    The economy that supported the pre-neoliberal era of relatively high wages, and relatively generous public deficit spending on welfare and education, was also hugely resource extractive and suburbanizing. The capacity to redistribute wealth more evenly in the US was, in addition, contingent upon broader economic transformation that required dispossessions, expulsions, enclosures, primitive accumulations, US hegemony propped up by global wars, and a postwar US industrial triumph rooted in wartime accumulation and the relative devastation of Europe.[9] Wherever one looks, the accumulation of wealth requires these devastations, making even the lushest times at the AFDC, and the possibility for a temporary flourishing of alternative kinds of family structures, into a troubled gain. For these reasons, it may be that work that avoids the terminology of neoliberalism, or uses it warily – work by Endnotes, by Silvia Federici, or by Robert Brenner, for instance – provides better purchase on contemporary conditions. Because when they fail to name the fundamental, global, totalizing causes of policy shifts, accounts of neoliberalism miss the ruthlessly intensifying dynamics of capital accumulation that are simply propelled onward with extended credit.[10]

    Finally, if Keynesian social welfare is a wage supplement designed to encourage consumer spending, in which sense is it wise to pit it against the dominance of commerce and private interests? If extensive public deficit spending on social programs and neoliberal monetarism are just different ways of managing the economy, and if one takes the capitalist economy as fundamentally anathema to universal human flourishing, to what extent should we worry about the difference that neoliberalism makes? Family Values doesn’t quite answer these questions. However, it does do the crucially important work of historicizing the rise of private credit in relation to family-values conservativism, and dismantling the left-liberal tendency to lament neoliberalization because it clawed back the gains of the immediate postwar period. Without suggesting that no gains were made, Cooper shows how they were thoroughly mitigated by normative racial, sexual and gender ascription – ascription that determined how to divvy up Fordism’s generous provisions, and that continues to push people, especially the already suffering, into unwanted contracts in life and work.

    Notes

    [1] For a recent account along these lines see Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (New York: Zone Books, 2015).

    [2] See for instance Ronen Shamir, “The age of responsibilization: on market-embedded morality,” Economy and Society (37.1: 2008): 1-19.

    [3] I discuss Cooper’s blistering account of the student loan industry elsewhere.

    [4] Annie McClanahan, “Becoming Non-Economic: Human Capital Theory and Wendy Brown’s Undoing the Demos,” Theory & Event 20.2 (2017): 510-519.

    [5] Even scholars suggesting that, in being less interested in keeping people in regular work, crisis-era capitalism allows for “queer liberation” from cis-hetero norms, insist in the next breath that some elements of queer life are tolerable and easily assimilated – think pinkwashing and gay marriage – and some are not.

    [6] Michel Foucault, The Birth of Biopolitics:  Lectures at the Collège de France 1978-1979, trans. Graham Burchell (Palgrave, 2008): 226.

    [7] In an earlier work, where the figure of the state-supported single mother is absent, her take is more ambivalent. She argues that the welfare state “undertakes to protect life by redistributing the fruits of national wealth to all its citizens, even those who cannot work, but in exchange it imposes a reciprocal obligation: its contractors must in turn give their lives to the nation” (Melinda Cooper, Life as Surplus: Biotechnology and Capitalism in the Neoliberal Era [University of Washington Press, 2008]: 8).

    [8] Gavin Walker has recently argued that “the function of ‘welfare’ within capitalism has never been something separate from its workings; rather, it is something co-emergent and central to the operation of the capital-relation itself”: “Rather than being a political development in which capital’s violence is ameliorated through social spending, we should rather understand the welfare state as the primary mechanism through which the process of primitive accumulation can be continuously sustained in the advanced capitalist countries” (“The ‘Ideal Total Capitalist’: On the State-Form in the Critique of Political Economy,” Crisis & Critique 3.3 [2016]: 434-455).

    [9] For an account along these lines see “Misery and Debt,” Endnotes 2 (April 2010): 20-51.

    [10] I owe this point to discussion with Tim Kreiner.