Category: The b2o Review

The b2o Review is a non-peer reviewed publication, published and edited by the boundary 2 editorial collective and specific topic editors, featuring book reviews, interventions, videos, and collaborative projects.  

  • All Hitherto Existing Social Media

a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

It is a book ostensibly targeted at students, though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective that fails to embed uses of digital technology adequately in their larger contexts.

Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered active creators of content rather than simply consumers of it (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

[Image: screenshot of a Karl Marx Community Page on Facebook]

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization with Google, surveillance and privacy issues with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, to fully go along with Fuchs’s critique requires readers to already be in agreement with, or at least relatively sympathetic to, Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as they do a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead to some dismissing his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and that which threatens the book’s potential political efficacy.

Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-and-tails choice between “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford)—whose hard-edged skepticism would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, it may be that another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers had warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, of capitalism, and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

Two other areas that remain somewhat troubling concern Fuchs’s treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims “Wikipedians are prototypical contemporary communists” (243), partially to suggest that many people are already engaged in commons-based online activities, and yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”), and some have alleged that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order. And this is to say nothing of the role that editing “bots” play on the platform or the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in a rearguard self-defense, whilst the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if to use it one must rely on devices whose minerals were mined in warzones, which were assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question, can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is itself simply a new face on an old system. While Fuchs resolutely holds his position throughout, it is a stance that one does not encounter often enough in debates around social media, and one that will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.

  • From the Decision to the Digital

    a review of Alexander R. Galloway, Laruelle: Against the Digital

    by Andrew Culp

    ~
    Alexander R. Galloway’s forthcoming Laruelle: Against the Digital is a welcome and original entry in the discussion of French theorist François Laruelle’s thought. The book is at once both pedagogical and creative: it succinctly summarizes important aspects of Laruelle’s substantial oeuvre by placing his thought within the more familiar terrain of popular philosophies of difference (most notably the work of Gilles Deleuze and Alain Badiou) and creatively extends Laruelle’s work through a series of fourteen axioms.

The book is a bridge between current Anglophone scholarship on Laruelle, which largely treats Laruelle’s non-standard philosophy through an extension of problematics common to contemporary continental philosophy (Mullarkey 2006, Mullarkey and Smith 2012, Smith 2013, Gangle 2013, Kolozova 2014), and such scholarship’s maturation, which blazes new territory because it takes thought to be “an exercise in perpetual innovation” (Brassier 2003, 25). As such, Laruelle: Against the Digital stands out from other scholarship in that it is not primarily a work of exposition or application of the axioms laid out by Laruelle. This approach is apparent from the beginning, where Galloway declares that he is not a foot soldier in Laruelle’s army and that he does not proceed by way of Laruelle’s “non-philosophical” method (a method so thoroughly abstract that Laruelle appears to be the inheritor of French rationalism, though in his terminology, philosophy should remain only as “raw material” to carry thinking beyond philosophy’s image of thought). The significance of Galloway’s Laruelle is that he instead produces his own axioms, which follow from non-philosophy but are of his own design, and takes aim at a different target: the digital.

    The Laruellian Kernel

    Are philosophers no better than creationists? Philosophers may claim to hate irrationalist leaps of faith, but Laruelle locates such leaps precisely in philosophers’ own narcissistic origin stories. This argument follows from Chapter One of Galloway’s Laruelle, which outlines how all philosophy begins with the world as ‘fact.’ For example: the atomists begin with change, Kant with empirical judgment, and Fichte with the principle of identity. And because facts do not speak for themselves, philosophy elects for itself a second task — after establishing what ‘is’ — inventing a form of thought to reflect on the world. Philosophy thus arises out of a brash entitlement: the world exists to be thought. Galloway reminds us of this through Gottfried Leibniz, who tells us that “everything in the world happens for a specific reason” (and it is the job of philosophers to identify it), and Alfred North Whitehead, who alternatively says, “no actual entity, then no reason” (so it is up to philosophers to find one).

For Laruelle, various philosophies are but variations on a single approach that first begins by positing how the world presents itself, and second determines the mode of thought that is the appropriate response. Between the two halves, Laruelle finds a grand division: appearance/presence, essence/instance, Being/beings. Laruelle’s key claim is that philosophy cannot think the division itself. The consequence is that such a division is tantamount to cheating, as it wills thought into being through an original thoughtless act. This act of thoughtlessly splitting the world in half is what Laruelle calls “the philosophical decision.”

    Philosophy need not wait for Laruelle to be demoted, as it has already done this for itself; no longer the queen of the sciences, philosophy seems superfluous to the most harrowing realities of contemporary life. The recent focus on Laruelle did indeed come from a reinvigoration of philosophy that goes under the name ‘speculative realism.’ Certainly there are affinities between Laruelle and these philosophers — the early case was built by Ray Brassier, who emphasizes that Laruelle earnestly adopts an anti-correlationalist position similar to the one suggested by Quentin Meillassoux and distances himself from postmodern constructivism as much as other realists, all by positing the One as the Real. It is on the issue of philosophy, however, that Laruelle is most at odds with the irascible thinkers of speculative realism, for non-philosophy is not a revolt against philosophy nor is it a patronizing correction of how others see reality. 1 Galloway argues that non-philosophy should be considered materialist. He attributes to Laruelle a mix of empiricism, realism, and materialism but qualifies non-philosophy’s approach to the real as not a matter of the givenness of empirical reality but of lived experience (vécu) (Galloway, Laruelle, 24-25). The point of non-philosophy is to withdraw from philosophy by short-circuiting the attempt to reflect on what supposedly exists. To be clear: such withdrawal is not an anti-philosophy. Non-philosophy suspends philosophy, but also raids it for its own rigorous pursuit: an axiomatic investigation of the generic. 2

    From Decision to Digital

A sharp focus on the concept of “the digital” is Galloway’s main contribution — a concept not in the forefront of Laruelle’s work, but of great interest to all of us today. Drawing from non-philosophy’s basic insight, Galloway’s goal in Laruelle is to demonstrate the “special connection” shared by philosophy and the digital (15). Galloway asks his readers to consider a withdrawal from digitality that is parallel to the non-philosophical withdrawal from philosophy.

Just as Laruelle discovered the original division to which philosophy must remain silent, Galloway finds that the digital is the “basic distinction that makes it possible to make any distinction at all” (Laruelle, 26). Certainly the digital-analog opposition survives this reworking, but not as one might assume. Gone are the usual notions of online-offline, new-old, stepwise-continuous variation, etc. To maintain these definitions presupposes the digital, or as Galloway defines it, “the capacity to divide things and make distinctions between them” (26). Non-philosophy’s analogy for the digital thus becomes the processes of distinction and decision themselves.

    The dialectic is where Galloway provocatively traces the history of digitality. This is because he argues that digitality is “not so much 0 and 1” but “1 and 2” (Galloway, Laruelle, 26). Drawing on Marxist definitions of the dialectical process, he defines the movement from one to two as analysis, while the movement from two to one is synthesis (26-27). In this way, Laruelle can say that, “Hegel is dead, but he lives on inside the electric calculator” (Introduction aux sciences génériques, 28, qtd in Galloway, Laruelle, 32). Playing Badiou and Deleuze off of each other, as he does throughout the book, Galloway subsequently outlines the political stakes between them — with Badiou establishing clear reference points through the argument that analysis is for leftists and synthesis for reactionaries, and Deleuze as a progenitor of non-philosophy still too tied to the world of difference but shrewd enough to have a Spinozist distaste for both movements of the dialectic (Laruelle, 27-30). Galloway looks to Laruelle to get beyond Badiou’s analytic leftism and Deleuze’s “Spinozist grand compromise” (30). His proposal is a withdrawal in the name of indecision that demands abstention from digitality’s attempt to “encode and simulate anything whatsoever in the universe” (31).

    Insufficiency

    Insufficiency is the idea into which Galloway sharpens the stakes of non-philosophy. In doing so, he does to Laruelle what Deleuze does to Spinoza. While Deleuze refashions philosophy into the pursuit of adequate knowledge, the eminently practical task of understanding the conditions of chance encounters enough to gain the capacity to influence them, Galloway makes non-philosophy into the labor of inadequacy, a mode of thought that embraces the event of creation through a withdrawal from decision. If Deleuze turns Spinoza into a pragmatist, then Galloway turns Laruelle into a nihilist.

There are echoes of Massimo Cacciari, Giorgio Agamben, and Afro-pessimism in Galloway’s Laruelle. This is because he uses nihilism’s marriage of withdrawal, opacity, and darkness as his orientation to politics, ethics, and aesthetics. From Cacciari, Galloway borrows a politics of non-compromise. But while the Italian Autonomist Marxist milieu of which Cacciari’s negative thought is characteristic emphasizes subjectivity, non-philosophy takes the subject to be one of philosophy’s dirty sins and makes no place for it. Yet Galloway is not shy about bringing up examples, such as Bartleby, Occupy, and other figures of non-action. Though as in Agamben, Galloway’s figures only gain significance in their insufficiency. “The more I am anonymous, the more I am present,” Galloway repeats from Tiqqun to axiomatically argue the centrality of opacity (233-236). There is also a strange affinity between Galloway and Afro-pessimists, who both oppose the integrationist tendencies of representational systems ultimately premised on the exclusion, exploitation, and elimination of blackness. In spite of potential differences, both define blackness as absolute foreclosure to being, from which Galloway is determined to “channel that great saint of radical blackness, Toussaint Louverture,” in order to bring about a “cataclysm of human color” through the “blanket totality of black” that “renders color invalid” and brings about “a new uchromia, a new color utopia rooted in the generic black universe” (188-189). What remains an open question is: how does such a formulation of the generic depart from the philosophy of difference’s becoming-minor, whereby liberation must first pass through the figures of the woman, the fugitive, and the foreigner?

    Actually Existing Digitality

One could read Laruelle not as urging thought to become more practical, but to become less so. Evidence for such a claim comes in his retreat to dense abstract writing and a strong insistence against providing examples. Each is an effect of non-philosophy’s approach, which is both rigorous and generic. Although possibly justified, there are those who object stylistically to Laruelle for taking too many liberties with his prose; most considerations tend to make up for such flights of fancy by putting non-philosophy in communication with more familiar philosophies of difference (Mullarkey 2006; Kolozova 2014). Yet the strangeness of the non-philosophical method is not a stylistic choice intended to encourage reflection. Non-philosophy is quite explicitly not a philosophy of difference — Laruelle’s landmark Philosophies of Difference is an indictment of Hegel, Heidegger, Nietzsche, Derrida, and Deleuze. To this end, non-philosophy does not seek to promote thought through marginality, Otherness, or any other form of alterity.

Readers who have hitherto been frustrated with non-philosophy’s impenetrability may be more attracted to the second part of Galloway’s Laruelle. In part two, Galloway addresses actually existing digitality, such as computers and capitalism. This part also includes a contribution to the ethical turn, which is premised on a geometrically neat set of axioms whereby ethics is the One and politics is the division of the One into two. He develops each chapter through numerous examples, many of them concrete, that help fold non-philosophical terms into discussions with long-established significance. For instance, Galloway makes his way through a chapter on art and utopia with the help of James Turrell’s light art, Laruelle’s Concept of Non-Photography, and August von Briesen’s automatic drawing (194-218). The book is over three hundred pages long, so most readers will probably appreciate the brevity of many of the chapters in part two. The chapters are short enough to be impressionistic, while implying that a treatment as fully rigorous as non-philosophy often demands would be much longer.

    Questions

    While his diagrammatical thinking is very clear, I find it more difficult to determine during Galloway’s philosophical expositions whether he is embracing or criticizing a concept. The difficulty of such determinations is compounded by the ambivalence of the non-philosophical method, which adopts philosophy as its raw material while simultaneously declaring that philosophical concepts are insufficient. My second fear is that while Galloway is quite adept at wielding his reworked concept of ‘the digital,’ his own trademark rigor may be lost when taken up by less judicious scholars. In particular, his attack on digitality could form the footnote for a disingenuous defense of everything analog.

    There is also something deeper at stake: What if we are in the age of non-representation? From the modernists to Rancière and Occupy, we have copious examples of non-representational aesthetics and politics. But perhaps all previous philosophy has only gestured at non-representational thought, and non-philosophy is the first to realize this goal. If so, then a fundamental objection could be raised about both Galloway’s Laruelle and non-philosophy in general: is non-philosophy properly non-thinking or is it just plain not thinking? Galloway’s axiomatic approach is a refreshing counterpoint to Laruelle’s routine circumlocution. Yet a number of the key concepts that non-philosophy provides are still frustratingly elusive. Unlike the targets of Laruelle’s criticism, Derrida and Deleuze, non-philosophy strives to avoid the obscuring effects of aporia and paradox — so is its own use of opacity simply playing coy, or to be understood purely as a statement that the emperor has no clothes? While I am intrigued by anexact concepts such as ‘the prevent,’ and I understand the basic critique of the standard model of philosophy, I am still not sure what non-philosophy does. Perhaps that is an unfair question given the sterility of the One. But as Hardt and Negri remind us in the epigraph to Empire, “every tool is a weapon if you hold it right.” We now know that non-philosophy cuts — what remains to be seen, is where and how deeply.
    _____

Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. In his current project, Escape, he explores the apathy, distraction, and cultural exhaustion born from the 24/7 demands of an ‘always-on’ media-driven society. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues.

    _____

    Notes

    1. There are two qualifications worth mentioning: first, Laruelle presents non-philosophy as a scientific enterprise. There is little proximity between non-philosophy’s scientific approach and other sciences, such as techno-science, big science, scientific modernity, modern rationality, or the scientific method. Perhaps it is closest to Althusser’s science, but some more detailed specification of this point would be welcome.

    2. Galloway lays out the non-philosophy of generic immanence, The One, in Chapter Two of Laruelle. Though important, Galloway’s main contribution is not a summation of Laruelle’s version of immanence and thus not the focus of this review. Substantial summaries of this sort are already available, including Mullarkey 2006, and Smith 2013.

    Bibliography

    Brassier, Ray (2003) “Axiomatic Heresy: The Non-Philosophy of François Laruelle,” Radical Philosophy 121.
    Gangle, Rocco (2013) François Laruelle’s Philosophies of Difference (Edinburgh, UK: Edinburgh University Press).
Hardt, Michael and Antonio Negri (2000) Empire (Cambridge, MA: Harvard University Press).
Kolozova, Katerina (2014) Cut of the Real (New York, USA: Columbia University Press).
    Laruelle, François (2010/1986) Philosophies of Difference (London, UK and New York, USA: Continuum).
    Laruelle, François (2011) Concept of Non-Photography (Falmouth, UK: Urbanomic).
    Mullarkey, John (2006) Post-Continental Philosophy (London, UK: Continuum).
    Mullarkey, John and Anthony Paul Smith (eds) (2012) Laruelle and Non-Philosophy (Edinburgh, UK: Edinburgh University Press).
    Smith, Anthony Paul (2013) A Non-Philosophical Theory of Nature (New York, USA: Palgrave Macmillan).

  • Henry A. Giroux — The Responsibility of Intellectuals in the Shadow of the Atomic Plague

    by Henry A. Giroux

Seventy years after the horror of Hiroshima, intellectuals negotiate a vastly changed cultural, political and moral geography. Pondering what Hiroshima means for American history and consciousness proves as fraught an intellectual exercise as taking up this critical issue in the years and the decades that followed this staggering inhumanity, albeit for vastly different reasons. Now that we are living in a 24/7 screen culture hawking incessant apocalypse, how we understand Foucault’s pregnant observation that history is always a history of the present takes on a greater significance, especially in light of the fact that historical memory is not simply being rewritten but is disappearing.1 Once an emancipatory pedagogical and political project predicated on the right to study and engage the past critically, history has receded into a depoliticizing culture of consumerism, a wholesale attack on science, the glorification of military ideals, an embrace of the punishing state, and a nostalgic invocation of the greatest generation. Inscribed in insipid patriotic platitudes and decontextualized isolated facts, history under the reign of neoliberalism has been either cleansed of its most critical impulses and dangerous memories, or it has been reduced to a contrived narrative that sustains the fictions and ideologies of the rich and powerful. History has not only become a site of collective amnesia but has also been appropriated so as to transform “the past into a container full of colorful or colorless, appetizing or insipid bits, all floating with the same specific gravity.”2 Consequently, what intellectuals now have to say about Hiroshima and history in general is not of the slightest interest to nine tenths of the American population. While writers of fiction might find such generalized public indifference to their craft freeing, even “inebriating,” as Philip Roth has recently written, for the chroniclers of history it is a cry in the wilderness.3

At the same time, the legacy of Hiroshima is present but barely grasped, as the existential anxieties and dread of nuclear annihilation that racked the early 1950s have given way to a contemporary fundamentalist fatalism embodied in collective uncertainty, a predilection for apocalyptic violence, a political economy of disposability, and an expanding culture of cruelty that has fused with the entertainment industry. We’ve not produced a generation of war protestors or government agitators, to be sure, but rather a generation of youth who no longer believe they have a future that will be any different from the present.4 That such connections tying the past to the present are lost signals not merely the emergence of a disimagination machine that wages an assault on historical memory, civic literacy, and civic agency. It also points to a historical shift in which the perpetual disappearance of that atomic moment signals a further deepening of our own national psychosis.

If, as Edward Glover once observed, “Hiroshima and Nagasaki had rendered actual the most extreme fantasies of world destruction encountered in the insane or in the nightmares of ordinary people,” the neoliberal disimagination machine has rendered such horrific reality a collective fantasy driven by the spectacle of violence, nourished by sensationalism, and reinforced by the scourge of commodified and trivialized entertainment.5 The disimagination machine threatens democratic public life by devaluing social agency, historical memory, and critical consciousness, and in doing so it creates the conditions for people to be ethically compromised and politically infantilized. Returning to Hiroshima is not only necessary to break out of the moral cocoon that puts reason and memory to sleep but also to rediscover our imaginative capacities for civic literacy on behalf of the public good, especially if such action demands that we remember, as Robert Jay Lifton and Greg Mitchell remark, that “Every small act of violence, then, has some connection with, if not sanction from, the violence of Hiroshima and Nagasaki.”6

On Monday, August 6, 1945, the United States unleashed an atomic bomb on Hiroshima, killing 70,000 people instantly and another 70,000 within five years—an opening volley in a nuclear campaign visited on Nagasaki in the days that followed.7 In the immediate aftermath, the incineration of mostly innocent civilians was buried in official government pronouncements about the victory of the bombings of both Hiroshima and Nagasaki. The atomic bomb was celebrated by those who argued that its use was responsible for concluding the war with Japan. Also applauded was the power of the bomb and the wonder of science in creating it, especially “the atmosphere of technological fanaticism” in which scientists worked to create the most powerful weapon of destruction then known to the world.8 Conventional justification for dropping the atomic bombs held that “it was the most expedient measure to securing Japan’s surrender [and] that the bomb was used to shorten the agony of war and to save American lives.”9 Left out of that succinct legitimating narrative were the growing objections to the use of atomic weaponry put forth by a number of top military leaders and politicians, including General Dwight Eisenhower, who was then the Supreme Allied Commander in Europe, former President Herbert Hoover, and General Douglas MacArthur, all of whom argued it was not necessary to end the war,10 a position later proven to be correct.

For a brief time, the atom bomb was celebrated as a kind of magic talisman entwining salvation and scientific inventiveness, and in doing so it functioned to “simultaneously domesticate the unimaginable while charging the mundane surroundings of our everyday lives with a weight and sense of importance unmatched in modern times.”11 In spite of the initial celebration of the effects of the bomb and the orthodox defense that accompanied it, whatever positive value the bomb may have had among the American public, intellectuals, and popular media began to dissipate as more and more people became aware of the massive death, suffering, and misery it caused.12

Kenzaburo Oe, the Nobel Prize winner for Literature, noted that in spite of attempts to justify the bombing “from the instant the atomic bomb exploded, it [soon] became the symbol of human evil, [embodying] the absolute evil of war.”13 What particularly troubled Oe was the scientific and intellectual complicity in the creation of the bomb and in the lobbying for its use, with acute awareness that it would turn Hiroshima into a “vast ugly death chamber.”14 More pointedly, it revealed a new stage in the merging of military actions and scientific methods, indeed a new era in which the technology of destruction could destroy the earth in roughly the time it takes to boil an egg. The bombing of Hiroshima extended a new industrially enabled kind of violence and warfare in which the distinction between soldiers and civilians disappeared and the indiscriminate bombing of civilians was normalized. But more than this, the American government exhibited a “total embrace of the atom bomb” that signaled support for the first time for a “notion of unbounded annihilation” and “the totality of destruction.”15

Hiroshima designated the beginning of the nuclear era in which, as Oh Jung points out, “Combatants were engaged on a path toward total war in which technological advances, coupled with the increasing effectiveness of an air strategy, began to undermine the ethical view that civilians should not be targeted… This pattern of wholesale destruction blurred the distinction between military and civilian casualties.”16 The destructive power of the bomb and its use on civilians also marked a turning point in American self-identity in which the United States began to think of itself as a superpower, which, as Robert Jay Lifton points out, refers to “a national mindset–put forward strongly by a tight-knit leadership group–that takes on a sense of omnipotence, of unique standing in the world that grants it the right to hold sway over all other nations.”17 The power of the scientific imagination and its murderous deployment gave birth simultaneously to the American disimagination machine with its capacity to rewrite history in order to render it an irrelevant relic best forgotten.

What remains particularly ghastly about the rationale for dropping two atomic bombs is the attempt on the part of its defenders to construct a redemptive narrative through a perversion of humanistic commitment, of mass slaughter justified in the name of saving lives and winning the war.18 This was a humanism under siege, transformed into its terrifying opposite and placed on the side of what Edmund Wilson called the Faustian possibility of a grotesque “plague and annihilation.”19 In part, Hiroshima represented the achieved transcendence of military metaphysics, now a defining feature of national identity, its more poisonous and powerful investment in the cult of scientism, instrumental rationality, and technological fanaticism—and the simultaneous marginalization of scientific evidence and intellectual rigor, even reason itself. That Hiroshima was used to redefine America’s “national mission and its utopian possibilities”20 was nothing short of what the late historian Howard Zinn called a “devastating commentary on our moral culture.”21 More pointedly it serves as a grim commentary on our national sanity. In most of these cases, matters of morality and justice were dissolved into technical questions and reductive chauvinism relating to matters of governmentally massaged efficiency, scientific “expertise,” and American exceptionalism. As Robert Jay Lifton and Greg Mitchell stated, the atom bomb was treated as symbolic of the power of post-war America rather than as a “ruthless weapon of indiscriminate destruction,” a framing that conveniently put to rest painful questions concerning justice, morality, and ethical responsibility. They write:

    Our official narrative precluded anything suggesting atonement. Rather the bomb itself had to be “redeemed”: As “a frightening manifestation of technological evil … it needed to be reformed, transformed, managed, or turned into the vehicle of a promising future,” [as historian M. Susan] Lindee argued. “It was necessary, somehow, to redeem the bomb.” In other words, to avoid historical and moral responsibility, we acted immorally and claimed virtue. We sank deeper, that is, into moral inversion.22

This narrative of redemption was soon challenged by a number of historians who argued that the dropping of the atom bomb had less to do with winning the war than with an attempt to put pressure on the Soviet Union not to expand its empire into territory deemed essential to American interests.23 Protecting America’s superiority in a potential Soviet-American conflict was a decisive factor in dropping the bomb. In addition, the Truman administration needed to provide legitimation to Congress for the staggering sums of money spent on the Manhattan Project in developing the atomic weapons program and for procuring future funding necessary to continue military appropriations for ongoing research long after the war ended.24 Howard Zinn goes even further, asserting that the government’s weak defense for the bombing of Hiroshima was not only false but was complicitous with an act of terrorism. Refusing to relinquish his role as a public intellectual willing to hold power accountable, he writes, “Can we … comprehend the killing of 200,000 people to make a point about American power?”25 A number of historians, including Gar Alperovitz and Tsuyoshi Hasegawa, also attempted to deflate this official defense of Hiroshima by providing counter-evidence that the Japanese were ready to surrender as a result of a number of factors, including the nonstop bombing of 26 cities before Hiroshima and Nagasaki, the success of the naval and military blockade of Japan, and the Soviet Union’s entrance into the war on August 9th.26

The narrative of redemption and the criticism it provoked are important for understanding the role that intellectuals assumed at this historical moment to address what would be the beginning of the nuclear weapons era, and how that role for critics of the nuclear arms race has faded somewhat at the beginning of the twenty-first century. Historical reflection on this tragic foray into the nuclear age reveals the decades-long dismantling of a culture’s infrastructure of ideas, its growing intolerance for critical thought in light of the pressures placed on media, on universities, and on increasingly isolated intellectuals to support comforting mythologies and official narratives, and thus to cede the responsibility to give effective voice to unpopular realities.

Within a short time after the dropping of the atom bombs on Hiroshima and Nagasaki, John Hersey wrote a devastating description of the misery and suffering caused by the bomb. Removing the bomb from abstract arguments endorsing matters of technique, efficiency, and national honor, Hersey first published in The New Yorker and later in a widely read book an exhaustive and terrifying description of the bomb’s effects on the people of Hiroshima, portraying in detail the horror of the suffering it caused. There is one haunting passage that not only illustrates the horror of the pain and suffering, but also offers a powerful metaphor for the blindness that overtook both the victims and the perpetrators. He writes:

    On his way back with the water, [Father Kleinsorge] got lost on a detour around a fallen tree, and as he looked for his way through the woods, he heard a voice ask from the underbrush, ‘Have you anything to drink?’ He saw a uniform. Thinking there was just one soldier, he approached with the water. When he had penetrated the bushes, he saw there were about twenty men, they were all in exactly the same nightmarish state: their faces were wholly burned, their eye sockets were hollow, the fluid from their melted eyes had run down their cheeks. Their mouths were mere swollen, pus-covered wounds, which they could not bear to stretch enough to admit the spout of the teapot.27

The nightmarish image of fallen soldiers staring with hollow sockets, eyes liquefied on cheeks and mouths swollen and pus-filled, stands as a warning to those who would blindly refuse the moral witnessing necessary to keep alive for future generations the memory of the horror of nuclear weapons and the need to eliminate them. Hersey’s literal depiction of mass violence against civilians serves as a kind of mirrored doubling, referring at one level to nations blindly driven by militarism and hyper-nationalism. At another level, perpetrators become victims who soon mimic their perpetrators, seizing upon their own victimization as a rationale to become blind to their own injustices.

Pearl Harbor enabled Americans to view themselves as the victims, but they then assumed the identity of the perpetrators and became willfully blind to the United States’ own escalation of violence and injustice. Employing both a poisonous racism and a weapon of mad violence against the Japanese people, the US government imagined Japan as the ultimate enemy, and then pursued tactics that blinded the American public to its own humanity; in doing so the United States became its own worst enemy by turning against its most cherished democratic principles. In a sense, this self-imposed sightlessness functioned as part of what Jacques Derrida once called a societal autoimmune response, one in which the body’s immune system attacked its own bodily defenses.28 Fortunately, this state of political and moral blindness did not extend to a number of critics who for the next fifty years railed aggressively against the dropping of the atomic bombs and the beginning of the nuclear age.

    Responding to Hersey’s article on the bombing of Hiroshima published in The New Yorker, Mary McCarthy argued that he had reduced the bombing to the same level of journalism used to report natural catastrophes such as “fires, floods, and earthquakes” and in doing so had reduced a grotesque act of barbarism to “a human interest story” that had failed to grasp the bomb’s nihilism, and the role that “bombers, the scientists, the government” and others played in producing this monstrous act.29 McCarthy was alarmed that Hersey had “failed to consider why it was used, who was responsible, and whether it had been necessary.”30 McCarthy was only partly right. While it was true that Hersey didn’t tackle the larger political, cultural and social conditions of the event’s unfolding, his article provided one of the few detailed reports at the time of the horrors the bomb inflicted, stoking a sense of trepidation about nuclear weapons along with a modicum of moral outrage over the decision to drop the bomb—dispositions that most Americans had not considered at the time. Hersey was not alone. Wilfred Burchett, writing for the London Daily Express, was the first journalist to provide an independent account of the suffering, misery, and death that engulfed Hiroshima after the bomb was dropped on the city. For Burchett, the cataclysm and horror he witnessed first-hand resembled a vision of hell that he aptly termed “the Atomic Plague.” He writes:

    Hiroshima does not look like a bombed city. It looks as if a monster steamroller had passed over it and squashed it out of existence. I write these facts as dispassionately as I can in the hope that they will act as a warning to the world. In this first testing ground of the atomic bomb I have seen the most terrible and frightening desolation in four years of war. It makes a blitzed Pacific island seem like an Eden. The damage is far greater than photographs can show.31

In the end, in spite of such accounts, fear and moral outrage did little to put an end to the nuclear arms race, but they did prompt a number of intellectuals to enter the public realm to denounce the bombing, the ongoing advance of a nuclear weapons program, and the ever-present threat of annihilation it posed.

A number of important questions emerge from the above analysis, but two issues in particular stand out for me in light of the role that academics and public intellectuals have played in addressing the bombing of Hiroshima, the emergence of nuclear weapons on a global scale, and the imminent threat of human annihilation posed by the continuing existence and potential use of such weapons. The first question focuses on what has been learned from the bombing of Hiroshima, and the second concerns the disturbing issue of how violence, and hence Hiroshima itself, have become normalized in the collective American psyche.

    In the aftermath of the bombing of Hiroshima, there was a major debate not just about the emergence of the atomic age and the moral, economic, scientific, military, and political forces that gave rise to it. There was also a heated debate about the ways in which the embrace of the atomic age altered the emerging nature of state power, gave rise to new forms of militarism, put American lives at risk, created environmental hazards, produced an emergent surveillance state, furthered the politics of state secrecy, and put into play a series of deadly diplomatic crises, reinforced by the logic of brinkmanship and a belief in the totality of war.32

    Hiroshima not only unleashed immense misery, unimaginable suffering, and wanton death on Japanese civilians, it also gave rise to anti-democratic tendencies in the United States government that put the health, safety, and liberty of the American people at risk. Shrouded in secrecy, the government machinery of death that produced the bomb did everything possible to cover up not only the most grotesque effects of the bomb on the people of Hiroshima and Nagasaki but also the dangerous hazards it posed to the American people. Lifton and Mitchell argue convincingly that while the development of the bomb and its immediate effects were shrouded in concealment by the government, before long concealment developed into a cover-up marked by government lies and the falsification of information.33 With respect to the horrors visited upon Hiroshima and Nagasaki, films taken by Japanese and American photographers were hidden for years from the American public for fear that they would create both a moral panic and a backlash against the funding for nuclear weapons.34 For example, the Atomic Energy Commission lied about the extent and danger of radiation fallout, going so far as to mount a campaign claiming that "fallout does not constitute a serious hazard to any living thing outside the test site."35 This act of falsification took place in spite of the fact that thousands of military personnel were exposed to high levels of radiation within and outside of the test sites.

    In addition, the Atomic Energy Commission, in conjunction with the Department of Defense, the Department of Veterans Affairs, the Central Intelligence Agency, and other government departments, engaged in a series of medical experiments designed to test the effects of different levels of radiation exposure on military personnel, medical patients, prisoners, and others in various sites. According to Lifton and Mitchell, these experiments took the shape of exposing people intentionally to "radiation releases or by placing military personnel at or near ground zero of bomb tests."36 It gets worse. They also note that "from 1945 through 1947, bomb-grade plutonium injections were given to thirty-one patients" [in a variety of hospitals and medical centers] and that all of these "experiments were shrouded in secrecy and, when deemed necessary, in lies….the experiments were intended to show what type or amount of exposure would cause damage to normal, healthy people in a nuclear war."37 Some of the long-lasting legacies of the birth of the atomic bomb also included the rise of plutonium dumps, environmental and health risks, the cult of expertise, and the subordination of the peaceful development of technology to a large-scale interest in using technology for the organized production of violence. Another notable development raised by many critics in the years following the launch of the atomic age was the rise of a government mired in secrecy, the repression of dissent, and the legitimation of a type of civic illiteracy in which Americans were told to leave "the gravest problems, military and social, completely in the hands of experts and political leaders who claimed to have them under control."38

    All of these anti-democratic tendencies unleashed by the atomic age came under scrutiny during the latter half of the twentieth century. The terror of a nuclear holocaust, an intense sense of alienation from the commanding institutions of power, and deep anxiety about the demise of the future spawned growing unrest, ideological dissent, and massive outbursts of resistance among students and intellectuals all over the globe from the sixties until the beginning of the twenty-first century, calling for the outlawing of militarism, nuclear production and stockpiling, and the nuclear propaganda machine. Literary writers extending from James Agee to Kurt Vonnegut, Jr. condemned the death-saturated machinery launched by the atomic age. Moreover, public intellectuals from Dwight Macdonald and Bertrand Russell to Helen Caldicott, Ronald Takaki, Noam Chomsky, and Howard Zinn fanned the flames of resistance both to the nuclear arms race and weapons and to the development of nuclear technologies. Others, such as the environmental activist George Monbiot, have supported the nuclear industry but denounced the nuclear arms race. In doing so, he has argued that "The anti-nuclear movement … has misled the world about the impacts of radiation on human health [producing] claims … ungrounded in science, unsupportable when challenged and wildly wrong [and] have done other people, and ourselves, a terrible disservice."39

    In addition, in light of the nuclear crises that extend from the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986 to the more recent Fukushima nuclear disaster in 2011, a myriad of social movements and mass demonstrations against nuclear power have developed and taken place all over the world.40 While deep moral and political concerns over the legacy of Hiroshima seemed to be fading in the United States, the tragedy of 9/11 and the endlessly replayed images of the two planes crashing into the twin towers of the World Trade Center resurrected once again the frightening image of what Colonel Paul Tibbets, Jr., the Enola Gay's pilot, referred to as "that awful cloud… boiling up, mushrooming, terrible and incredibly tall" after "Little Boy," a 9,700-pound uranium bomb, was released over Hiroshima. This time, however, collective anxieties were focused not on the atomic bombing of Hiroshima and its implications for a nuclear Armageddon but on the fear of terrorists using a nuclear weapon to wreak havoc on Americans. But a decade later even that fear, however parochially framed, seems to have diminished, if not entirely disappeared, even though it has produced an aggressive attack on civil liberties and given even more power to an egregious and dangerous surveillance state.

    Atomic anxiety confronts a world in which nine states have nuclear weapons and a number of them, such as North Korea, Pakistan, and India, have threatened to use them. James McCluskey points out that "there are over 20,000 nuclear weapons in existence, sufficient destructive power to incinerate every human being on the planet three times over [and] there are more than 2000 held on hair trigger alert, already mounted on board their missiles and ready to be launched at a moment's notice."41 These weapons are far more powerful and deadly than the atomic bomb, and the possibility that they might be used, even inadvertently, is high. This threat becomes all the more real in light of the fact that the world has seen a history of miscommunications and technological malfunctions, suggesting both the fragility of such weapons and the dire stupidity of positions defending their safety and value as a nuclear deterrent.42 The 2014 report Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy not only outlines a history of such near misses in great detail, it also makes terrifyingly clear that "the risk associated with nuclear weapons is high."43 It is also worth noting that an enormous amount of money is wasted to maintain these weapons and missiles, develop more sophisticated nuclear weaponries, and invest in ever more weapons laboratories. McCluskey estimates world funding for such weapons at $1 trillion per decade, while Arms Control Today reported in 2012 that yearly funding for U.S. nuclear weapons activity was $31 billion.44

    In the United States, the mushroom cloud connected to Hiroshima is now connected to much larger forces of destruction, including a turn to instrumental reason over moral considerations, the normalization of violence in America, the militarization of local police forces, an attack on civil liberties, the rise of the surveillance state, a dangerous turn towards state secrecy under President Obama, the rise of the carceral state, and the elevation of war as a central organizing principle of society. Rather than working to prevent a nuclear mishap or to curb the expansion of the arms industry, the United States ranks high on the list of those nations that could trigger what Amy Goodman calls that "horrible moment when hubris, accident or inhumanity triggers the next nuclear attack."45 Given the history of lies, deceptions, falsifications, and retreat into secrecy that characterizes the military-industrial-surveillance complex's strangulating hold on the American government, it would be naïve to assume that the U.S. government can be trusted to act with good intentions when it comes to matters of domestic and foreign policy. State terrorism has increasingly become the DNA of American governance and politics and is evident in government cover-ups, corruption, and numerous acts of bad faith. Secrecy, lies, and deception have a long history in the United States, and the issue is not merely to uncover such instances of state deception but to connect the dots over time and to map the connections, for instance, between the government's early attempts to cover up the inhumane destruction unleashed by the atomic bomb on Hiroshima and Nagasaki and the role the NSA and other intelligence agencies play today in distorting the truth about government policies while embracing an all-encompassing notion of surveillance and the squelching of civil liberties, privacy, and freedom.

    Hiroshima symbolizes the fact that the United States commits unspeakable acts, making it easier to refuse to rely on politicians, academics, and alleged experts who refuse to support a politics of transparency and who serve mostly to legitimate anti-democratic, if not totalitarian, policies. Questioning a monstrous war machine whose roots lie in Hiroshima is the first step in declaring nuclear weapons unacceptable ethically and politically. This suggests a further mode of inquiry that focuses on how the rise of the military-industrial complex contributes to the escalation of nuclear weapons and what we can learn by tracing its roots to the development and use of the atom bomb. Moreover, it raises questions about the role played by intellectuals, both in and out of the academy, in conspiring to build the bomb and hide its effects from the American people. These are only some of the questions that need to be made visible, interrogated, and pursued in a variety of sites and public forums.

    One crucial issue today concerns what role intellectuals and matters of civic courage, engaged citizenship, and the educative nature of politics might play as part of a sustained effort to resurrect the memory of Hiroshima as both a warning and a signpost for rethinking the nature of collective struggle, reclaiming the radical imagination, and producing a sustained politics aimed at abolishing nuclear weapons forever. One step would be to revisit the conditions that made Hiroshima and Nagasaki possible, to explore how militarism and a kind of technological fanaticism merged under the star of scientific rationality. Another would be to make clear what the effects of such weapons are, to disclose the manufactured lie that such weapons make us safe. Indeed, this suggests the need for intellectuals, artists, and other cultural workers to use their skills, resources, and connections to develop massive educational campaigns.

    Such campaigns would not only make education, consciousness, and collective struggle the center of politics, but also work systematically to inform the public about the history of such weapons, the misery and suffering they have caused, and the ways they benefit the financial, governmental, and corporate elites who make huge amounts of money off the arms race, the promotion of nuclear deterrence, and the maintenance of a permanent warfare state. Intellectuals today appear numbed by ever-developing disasters, statistics of suffering and death, the Hollywood disimagination machine with its investment in a celluloid Apocalypse to which only superheroes can respond, and a consumer culture that thrives on self-interest and deplores collective political and ethical responsibility.

    There are no rationales for, or escapes from, the responsibility of preventing mass destruction through nuclear annihilation; the appeal to military necessity is no excuse for the indiscriminate bombing of civilians, whether in Hiroshima or Afghanistan. The sense of horror, fear, doubt, anxiety, and powerlessness that followed Hiroshima and Nagasaki up until the beginning of the 21st century seems to have faded in light of the Hollywood apocalypse machine, the mindlessness of celebrity and consumer cultures, the growing spectacles of violence, and a militarism that is now celebrated as one of the highest ideals of American life. In a society governed by militarism, consumerism, and neoliberal savagery, it has become more difficult to assume a position of moral, social, and political responsibility, to believe that politics matters, to imagine a future in which responding to the suffering of others is a central element of democratic life. When historical memory fades and people turn inward, remove themselves from politics, and embrace cynicism over educated hope, a culture of evil, suffering, and existential despair takes hold. Americans now live amid a culture of indifference sustained by an endless series of manufactured catastrophes that offer a source of entertainment, sensation, and instant pleasure.

    We live in a neoliberal culture that subordinates human needs to the demand for unchecked profits, privileges exchange values over the public good, and embraces commerce as the only viable model of social relations to shape the entirety of social life. Under such circumstances, violence becomes a form of entertainment rather than a source of alarm, and individuals no longer question society, becoming incapable of translating private troubles into larger public considerations. In the age following the use of the atom bomb on civilians, talk about evil, militarism, and the end of the world once stirred public debate and diverse resistance movements; now it promotes a culture of fear, moral panics, and a retreat into the black hole of the disimagination machine. The good news is that neoliberalism now makes clear that it cannot provide a vision to sustain society and works largely to destroy it. It is a metaphor for the atom bomb, a social, political, and moral embodiment of global destruction that needs to be stopped before it is too late. The future will look much brighter without the glow of atomic energy, and the legacy of death and destruction that extends from Hiroshima to Fukushima makes clear that no one can be a bystander if democracy is to survive.

    notes:
    1. This reference refers to a collection of interviews with Michel Foucault originally published by Semiotext(e). Michel Foucault, “What our present is?” Foucault Live: Collected Interviews, 1961–1984, ed. Sylvere Lotringer, trans. Lysa Hochroth and John Johnston (New York: Semiotext(e), 1989 and 1996), 407–415.
    Back to the essay

    2. Zygmunt Bauman and Leonidas Donskis, Moral Blindness: The Loss of Sensitivity in Liquid Modernity (Cambridge, UK: Polity Press, 2013), p. 33.
    Back to the essay

    3. Daniel Sandstrom Interviews Philip Roth, “My Life as a Writer,” New York Times (March 2, 2014). Online: http://www.nytimes.com/2014/03/16/books/review/my-life-as-a-writer.html
    Back to the essay

    4. Of course, the Occupy Movement in the United States and the Quebec student movement are exceptions to this trend. See, for instance, David Graeber, The Democracy Project: A History, A Crisis, A Movement (New York, NY: The Random House Publishing Group, 2013) and Henry A. Giroux, Neoliberalism’s War Against Higher Education (Chicago: Haymarket, 2014).
    Back to the essay

    5. Robert Jay Lifton and Greg Mitchell, Hiroshima in America (New York, NY: Avon Books, 1995).
    Back to the essay

    6. Ibid., Lifton and Mitchell, p. 345.
    Back to the essay

    7. Jennifer Rosenberg, “Hiroshima and Nagasaki (Part 2),” About.com – 20th Century History (March 28, 201). Online: http://history1900s.about.com/od/worldwarii/a/hiroshima_2.htm. A more powerful atom bomb was dropped on Nagasaki on August 9, 1945, and by the end of the year an estimated 70,000 had been killed. For the history of the making of the bomb, see the monumental Richard Rhodes, The Making of the Atomic Bomb, Anv Rep edition (New York: Simon & Schuster, 2012).
    Back to the essay

    8. The term “technological fanaticism” comes from Michael Sherry, who suggested that it produced an increased form of brutality. Cited in Howard Zinn, The Bomb (New York, NY: City Lights, 2010), pp. 54-55.
    Back to the essay

    9. Oh Jung, “Hiroshima and Nagasaki: The Decision to Drop the Bomb,” Michigan Journal of History Vol 1. No. 2 (Winter 2002). Online:
    http://michiganjournalhistory.files.wordpress.com/2014/02/oh_jung.pdf.
    Back to the essay

    10. See, in particular, Ronald Takaki, Hiroshima: Why America Dropped the Atomic Bomb (Boston: Back Bay Books, 1996).
    Back to the essay

    11. Peter Bacon Hales, Outside the Gates of Eden: The Dream of America from Hiroshima to Now (Chicago, IL: University of Chicago Press, 2014), p. 17.
    Back to the essay

    12. Paul Ham, Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath (New York: Doubleday, 2011).
    Back to the essay

    13. Kensaburo Oe, Hiroshima Notes (New York: Grove Press, 1965), p. 114.
    Back to the essay

    14. Ibid., Oe, Hiroshima Notes, p. 117.
    Back to the essay

    15. Robert Jay Lifton and Greg Mitchell, Hiroshima in America (New York, NY: Avon Books, 1995), pp. 314-315, 328.
    Back to the essay

    16. Ibid., Oh Jung, “Hiroshima and Nagasaki: The Decision to Drop the Bomb.”
    Back to the essay

    17. Robert Jay Lifton, “American Apocalypse,” The Nation (December 22, 2003), p. 12.
    Back to the essay

    18. For an interesting analysis of how the bomb was defended by the New York Times and a number of high-ranking politicians, especially after John Hersey’s Hiroshima appeared in The New Yorker, see Steve Rothman, “The Publication of ‘Hiroshima’ in The New Yorker,” HerseyHiroshima.com (January 8, 1997). Online: http://www.herseyhiroshima.com/hiro.php
    Back to the essay

    19. Wilson cited in Lifton and Mitchell, Hiroshima In America, p. 309.
    Back to the essay

    20. Ibid., Peter Bacon Hales, Outside The Gates of Eden: The Dream Of America From Hiroshima To Now, p. 8.
    Back to the essay

    21. Ibid., Zinn, The Bomb, p. 26.
    Back to the essay

    22. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America.
    Back to the essay

    23. For a more recent articulation of this argument, see Ward Wilson, Five Myths About Nuclear Weapons (New York: Mariner Books, 2013).
    Back to the essay

    24. Ronald Takaki, Hiroshima: Why America Dropped the Atomic Bomb (Boston: Back Bay Books, 1996), p. 39.
    Back to the essay

    25. Ibid., Zinn, The Bomb, p. 45.
    Back to the essay

    26. See, for example, Ibid., Hasegawa; Gar Alperovitz, Atomic Diplomacy: Hiroshima and Potsdam: The Use of the Atomic Bomb and the American Confrontation with Soviet Power (London: Pluto Press, 1994); and also Gar Alperovitz, The Decision to Use the Atomic Bomb (New York: Vintage, 1996). Ibid., Ham.
    Back to the essay

    27. John Hersey, Hiroshima (New York: Alfred A. Knopf, 1946), p. 68.
    Back to the essay

    28. Giovanna Borradori, ed., “Autoimmunity: Real and Symbolic Suicides–a dialogue with Jacques Derrida,” in Philosophy in a Time of Terror: Dialogues with Jurgen Habermas and Jacques Derrida (Chicago: University of Chicago Press, 2004), pp. 85-136.
    Back to the essay

    29. Mary McCarthy, “The Hiroshima ‘New Yorker’,” The New Yorker (November 1946). Online: http://americainclass.org/wp-content/uploads/2013/03/mccarthy_onhiroshima.pdf
    Back to the essay

    30. Ibid., Ham, Hiroshima Nagasaki, p. 469.
    Back to the essay

    31. George Burchett and Nick Shimmin, eds., Memoirs of a Rebel Journalist: The Autobiography of Wilfred Burchett (Sydney: UNSW Press, 2005), p. 229.
    Back to the essay

    32. For an informative analysis of the deep state and a politics driven by corporate power, see Bill Blunden, “The Zero-Sum Game of Perpetual War,” Counterpunch (September 2, 2014). Online: http://www.counterpunch.org/2014/09/02/the-zero-sum-game-of-perpetual-war/
    Back to the essay

    33. The following section relies on the work of both Lifton and Mitchell, Howard Zinn, and M. Susan Lindee.
    Back to the essay

    34. Greg Mitchell, “The Great Hiroshima Cover-up,” The Nation, (August 3, 2011). Online:
    http://www.thenation.com/blog/162543/great-hiroshima-cover#. Also see, Greg Mitchell, “Part 1: Atomic Devastation Hidden For Decades,” WhoWhatWhy (March 26, 2014). Online: http://whowhatwhy.com/2014/03/26/atomic-devastation-hidden-decades; Greg Mitchell, “Part 2: How They Hid the Worst Horrors of Hiroshima,” WhoWhatWhy, (March 28, 2014). Online:
    http://whowhatwhy.com/2014/03/28/part-2-how-they-hid-the-worst-horrors-of-hiroshima/; Greg Mitchell, “Part 3: Death and Suffering, in Living Color,” WhoWhatWhy (March 31, 2014). Online: http://whowhatwhy.com/2014/03/31/death-suffering-living-color/
    Back to the essay

    35. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 321.
    Back to the essay

    36. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 322.
    Back to the essay

    37. Ibid. Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 322-323.
    Back to the essay

    38. Ibid. Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 336.
    Back to the essay

    39. George Monbiot, “Evidence Meltdown,” The Guardian (April 5, 2011). Online: http://www.monbiot.com/2011/04/04/evidence-meltdown/
    Back to the essay

    40. Patrick Allitt, A Climate of Crisis: America in the Age of Environmentalism (New York: Penguin, 2015); Horace Herring, From Energy Dreams to Nuclear Nightmares: Lessons from the Anti-nuclear Power Movement in the 1970s (Chipping Norton, UK: Jon Carpenter Publishing, 2006); Alain Touraine, Anti-Nuclear Protest: The Opposition to Nuclear Energy in France (Cambridge, UK: Cambridge University Press, 1983); Stephen Croall, The Anti-Nuclear Handbook (New York: Random House, 1979). On the decade that enveloped the anti-nuclear moment in a series of crises, see Philip Jenkins, Decade of Nightmares: The End of the Sixties and the Making of Eighties America (New York: Oxford University Press, 2008).
    Back to the essay

    41. James McCluskey, “Nuclear Crisis: Can the Sane Prevail in Time?” Truthout (June 10, 2014). Online: http://www.truth-out.org/opinion/item/24273
    Back to the essay

    42. For a list of the crises, near misses, and nuclear warmongering that characterize United States foreign policy in the last few decades, see Noam Chomsky, “How Many Minutes to Midnight? Hiroshima Day 2014,” Truthout (August 5, 2014). Online: http://www.truth-out.org/news/item/25388-how-many-minutes-to-midnight-hiroshima-day-2014
    Back to the essay

    43. Patricia Lewis, Heather Williams, Benoît Pelopidas, and Sasan Aghlani, Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy (London: Chatham House, 2014). Online: http://www.chathamhouse.org/sites/files/chathamhouse/home/chatham/public_html/sites/default/files/20140428TooCloseforComfortNuclearUseLewisWilliamsPelopidasAghlani.pdf
    Back to the essay

    44. Jim McCluskey, “Nuclear Deterrence: The Lie to End All Lies,” Truthout (Oct 29, 2012). Online: http://www.truth-out.org/opinion/item/12381
    Back to the essay

    45. Amy Goodman, “Hiroshima and Nagasaki, 69 Years Later,” Truthdig (August 6, 2014). Online: http://www.truthdig.com/report/item/hiroshima_and_nagasaki_69_years_later_20140806
    Back to the essay

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.
    Back to the essay

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.
    Back to the essay

    3. Jones, 13.
    Back to the essay

    4. See Gibson.
    Back to the essay

    5. Jones, 5.
    Back to the essay

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.
    Back to the essay

    7. Jones, 5. See also Montfort.
    Back to the essay

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.
    Back to the essay

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The Lenses of Failure

    The Lenses of Failure

    The Art of Failure

    by Nathan Altice

    On Software’s Dark Souls II and Jesper Juul’s The Art of Failure

    ~

    I am speaking to a cat named Sweet Shalquoir. She lounges on a desk in a diminutive house near the center of Majula, a coastal settlement that harbors a small band of itinerant merchants, tradespeople, and mystics. Among Shalquoir’s wares is the Silvercat ring, whose circlet resembles a leaping, blue-eyed cat.

    ‘You’ve seen that gaping hole over there? Well, there’s nasty little vermin down there,’ Shalquoir says, observing my window shopping. ‘Although who you seek is even further below.’ She laughs. She knows her costly ring grants its wearer a cat-like affinity for lengthy drops. I check my inventory. Having just arrived in Majula, I have few souls on hand.

    I turn from Shalquoir and exit the house ringless. True to her word, a yawning chasm opens before me, its perimeter edged in slabbed stonework and crumbling statues but otherwise unmarked and unguarded. One could easily fall in while sprinting from house to house in search of Majula’s residents. Wary of an accidental fall, I nudge toward its edge.

    The pit has a mossy patina, as if it was once a well for giants that now lies parched after drinking centuries of Majula’s sun. Its surface is smooth save for a few distant torches sawing at the dark and several crossbeams that bisect its diameter at uneven intervals. Their configuration forms a makeshift spiral ladder. Corpses are slung across the beams like macabre dolls, warning wanderers fool enough to chase after nasty little vermin. But atop the first corpse gleams a pinprick of ethereal light, both a beacon to guide the first lengthy drop and a promise of immediate reward if one survives.

    Silvercat ring be damned, I think I can make it.

    I position myself parallel to the first crossbeam, eyes fixed on that glimmering point. I jump.

    The Jump

    [Dark Souls II screenshots source: ItsBlueLizardJello via YouTube]

    For a breathless second, I plunge toward the beam. My aim is true—but my body is weak. I collapse, sprawled atop the lashed wooden planks, inches from my coveted jewel. I evaporate into a green vapor as two words appear in the screen’s lower half: ‘YOU DIED.’

    Decisions such as these abound in Dark Souls II, the latest entry in developer From Software’s cult-to-crossover-hit series of games bearing the Souls moniker. The first, Demon’s Souls, debuted on the PlayStation 3 in 2009, attracting players with its understated lore, intricate level design, and relentless difficulty. Spiritual successor Dark Souls followed in 2011 and its direct sequel Dark Souls II released earlier this year.

    Each game adheres to standard medieval fantasy tropes: there are spellcasters, armor-clad knights, parapet-trimmed castles, and a variety of fire-spewing dragons. You select one out of several archetypal character classes (e.g., Cleric, Sorcerer, Swordsman), customize a few appearance options, then explore and fight through a series of interconnected, yet typically non-linear, locations populated by creatures of escalating difficulty. What distinguishes these games from the hundreds of other fantasy games those initial conditions could describe are their melancholy tone and their general disregard for player hand-holding. Your hero begins as little more than a voiceless, fragile husk with minimal direction and fewer resources. Merely surviving takes precedence over rescuing princesses or looting dungeons. The Souls games similarly reveal little about their settings or systems, driving some players to declare them among the worst games ever made while catalyzing others to revisit the game’s environs for hundreds of hours. Vibrant communities have emerged around the Souls series, partly in an effort to document the mechanics From Software purposefully obscures and partly to construct a coherent logic and lore from the scraps and minutiae the game provides.

    Dark Souls II Settings

    Unlike most action games, every encounter in Dark Souls II is potentially deadly, from the lowliest grunts to the largest boss creatures. To further raise the stakes, death has consequences. Slaying foes grants souls, the titular items that fuel both trade and character progression. Spending souls increases your survivability, whether you invest them directly in your character stats (e.g. Vitality) or a more powerful shield. However, dying forfeits any souls you are currently carrying and resets your progress to the last bonfire (i.e., checkpoint) you rested beside. The catch is that dying or resting resets any creatures you have previously slain, giving your quest a moribund, Sisyphean repetition that grinds impatient players to a halt. And once slain, you have one chance to recover your lost souls. A glowing green aura marks the site of your previous bereavement. Touch that mark before you die again and you regain your cache; fail to do so and you lose it forever. You will often fail to do so.

    What many Souls reviewers find refreshing about the game’s difficulty is actually a more forgiving variation of the death mechanics found in early ASCII-based games like Rogue (1980), Hack (1985), and NetHack (1987), wherein ‘permadeath’—i.e., death meant starting the game anew—was a central conceit. And those games were almost direct ‘ports’ of tabletop roleplaying progenitors like Dungeons & Dragons, whose early versions were skewed more toward the gritty realism of pulp literature than the godlike power fantasies of modern roleplaying games. A successful career in D&D meant accumulating enough treasure to eventually retire from dungeon-delving, so one could hire other hapless retainers to loot on your behalf. Death was frequent and expected because dungeons were dangerous places. And unless one’s Dungeon Master was particularly lenient, death was final. A fatal mistake meant re-rolling your character. In this sense, the Souls games stand apart from their videogame peers because of the conservatism of their design. Though countless games ape D&D’s generic fantasy setting and stat-based progress model, few adopt the existential dread of its early forms.

    Dark Souls II’s adherence to opaque systems and traditional difficulty has alienated players unaccustomed to the demands of earlier gaming models. For those repeatedly stymied by the game’s frustrations, several questions arise: Why put forth the effort in a game that feels so antagonistic toward its players? Is there any reward worth the frequent, unforgiving failure? Aren’t games supposed to be fun—and is failing fun?

    YOU DIED

    Games scholar Jesper Juul raises similar questions in The Art of Failure, the second book in MIT’s new Playful Thinking series. His central thesis is that games present players with a ‘paradox of failure’: we do not like to fail, yet games perpetually make us do so; weirder still, we seek out games voluntarily, even though the only victory they offer is over a failure that they themselves create. Despite games’ reputation as frivolous fun, they can humiliate and infuriate us. Real emotions are at stake. And, as Juul argues, ‘the paradox of failure is unique in that when you fail in a game, it really means that you were in some way inadequate’ (7). So when my character plunges down the pit in Majula, the developers do not tell me ‘Your character died,’ even though I have named that character. Instead the games remind us, ‘YOU DIED.’ YOU, the player, the one holding the Xbox 360 controller.

    The strength of Juul’s argument is that he does not rely on a single discipline but instead approaches failure via four related ‘lenses’: philosophy, psychology, game design, and fiction (30). Each lens has its own brief chapter and accompanying game examples, and throughout Juul interjects anecdotes from his personal play experience alongside lessons he’s learned co-designing a number of experimental video games. The breadth of examples is wide, ranging from big-budget games like Uncharted 2, Meteos, and Skate 2 to more obscure works like Flywrench, September 12, and Super Real Tennis.

    Juul’s first lens (chapter 2) links up his paradox of failure to a longstanding philosophical quandary known as the ‘paradox of painful art.’ Like video games, art tends to elicit painful emotions from viewers, whether a tragic stage play or a disturbing novel, yet contrary to the notion that we seek to avoid pain, people regularly pursue such art—even enjoy it. Juul provides a summary of positions philosophers have offered to explain this behavior, categorized as follows: deflationary arguments skirt the paradox by claiming that art doesn’t actually cause us pain in the first place; compensatory arguments acknowledge the pain, but claim that the sum of painful vs. pleasant reactions to art yields a net positive; and a-hedonistic arguments deny that humans are solely pleasure-seekers—some of us pursue pain.

    Juul’s commonsense response is that we should not limit human motivation to narrow, atemporal explanations. Instead, a synthesis of categories is possible, because we can successfully manage multiple contradictory desires based on immediate and long-term (i.e., aesthetic) time frames. He writes, ‘Our moment-to-moment desire to avoid unpleasant experiences is at odds with a longer-term aesthetic desire in which we understand failure, tragedy, and general unpleasantness to be necessary for our experience’ (115). In Dark Souls II, I faced a particularly challenging section early on when my character, a sorcerer, was under-powered and under-equipped to face a strong, agile boss known as The Pursuer. I spent close to four hours running the same path to the boss, dying dozens of times, with no net progress.

    Facing the Pursuer

    For Juul, my continued persistence did not betray a masochistic personality flaw (not that I didn’t consider it), nor would he trivialize my frustration (which I certainly felt), nor would he argue that I was eking out more pleasure than pain during my repeated trials (I certainly wasn’t). Instead, I was tolerating immediate failure in pursuit of a distant aesthetic goal, one that would not arrive during that game session—or many sessions to come. And indeed, this is why Juul calls games the ‘art of failure,’ because ‘games hurt us and then induce an urgency to repair our self-image’ (45). I could only overcome the Pursuer if I learned to play better. Juul writes, ‘Failure is integral to the enjoyment of game playing in a way that it is not integral to the enjoyment of learning in general. Games are a perspective on failure and learning as enjoyment, or satisfaction’ (45). Failure is part of what makes a game a game.

    Chapter 3 proceeds to the psychological lens, allowing Juul to review the myriad ways we experience failure emotionally. For many games, the impact can be significant: ‘To play a game is to take an emotional gamble. The higher the stakes, in terms of time investment, public acknowledgement, and personal importance, the higher are the potential losses and rewards’ (57). Failure doesn’t feel good, but again, paradoxically, we must first accept responsibility for our failures in order to then learn from them. ‘Once we accept responsibility,’ Juul writes, ‘failure also concretely pushes us to search for new strategies and learning opportunities in a game’ (116). But why can’t we learn without the painful consequences? Because most of us need prodding to be the best players we can be. In the absence of failure, players will cheese and cheat their way to favorable outcomes (59).

    Juul concludes that games help us grow—‘we come away from any skill-based game changed, wiser, and possessing new skills’ (59)—but his more interesting point is how we buffer the emotional toll of failure by diverting or transforming it. ‘Self-defeating’ players react to failure by lessening their efforts, a laissez-faire attitude that makes failure expected and thus less painful. ‘Spectacular’ failures, on the other hand, elevate negativity to an aesthetic focal point. When I laugh at the quivering pile of polygons clipped halfway through the floor geometry by the Pursuer’s blade, I’m no longer lamenting my own failure but celebrating the game’s.

    Chapter 4 provides a broad view of how games are designed to make us fail and counters much conventional wisdom about prevailing design trends. For instance, many players complain that contemporary games are too easy, that we don’t fail enough, but Juul argues that those players are confusing failure with punishment. Failure is now designed to be more frequent than in the past, but punishment is far less severe. Death in early arcade or console games often meant total failure, resetting your progress to the beginning of the game. Death in Dark Souls II merely forfeits your souls in-hand—any spent souls, found items, gained levels, or cached equipment are permanent. Punishment certainly feels severe when you lose tens of thousands of souls, but the consequences are far less jarring than losing your final life in Ghosts ’n Goblins.

    Juul outlines three different paths through which games lead us to success or failure—skill, chance, and labor—but notes that his categories are neither exhaustive nor mutually exclusive (75, 82). The first category is likely the most familiar for frequent game players: ‘When we fail in a game of skill, we are therefore marked as deficient in a straightforward way: as lacking the skills required to play the game’ (74). When our skills fail us, we only have ourselves to blame. Chance, however, ‘marks us in a different way…as being on poor terms with the gods, or as simply unlucky, which is still a personal trait that we would rather not have’ (75). With chance in play, failure gains a cosmic significance.

    Labor is one of the newer design paths, characterized by the low-skill, slow-grind style of play frequently maligned in Farmville and its clones, but also found in better-regarded titles like World of Warcraft (and RPGs in general). In these games, failure has its lowest stakes: ‘Lack of success in a game of labor therefore does not mark us as lacking in skill or luck, but at worst as someone lazy (or too busy). For those who are afraid of failure, this is close to an ideal state. For those who think of games as personal struggles for improvement, games of labor are anathema’ (79). Juul’s last point is an important lesson for critics quick to dismiss the ‘click-to-win’ genre outright. For players averse to personal or cosmic failure, games of labor are a welcome respite.

    Juul’s final lens (chapter 5) examines fictional failure. ‘Most video games,’ he writes, ‘represent our failures and successes by letting our performance be mirrored by a protagonist (or society, etc.) in the game’s fictional world. When we are unhappy to have failed, a fictional character is also unhappy’ (117). Beginning with this conventional case, Juul then discusses games that subvert or challenge the presumed alignment of player/character interests, asking whether games can be tragic or present situations where character failure might be the desired outcome. While Juul concedes that ‘the self-destruction of the protagonist remains awkward,’ complicity—a sense of player regret when facing a character’s repugnant actions—offers a ‘better variation’ of game tragedy (117). Juul argues that complicity is unique to games, an experience that is ‘more personal and stronger than simply witnessing a fictional character performing the same actions’ (113). When I nudge my character into Majula’s pit, I’m no longer a witness—I’m a participant.

    The Art of Failure’s final chapter focuses the prior lenses’ viewpoints on failure into a humanistic concluding point: ‘Failure forces us to reconsider what we are doing, to learn. Failure connects us personally to the events in the game; it proves that we matter, that the world does not simply continue regardless of our actions’ (122). For those who already accept games as a meaningful, expressive medium, Juul’s conclusion may be unsurprising. But this kind of thoughtful optimism is also part of the book’s strength. Juul’s writing is approachable and jargon-free, and the Playful Thinking series’ focus on depth, readability, and pocket-size volumes makes The Art of Failure an ideal book to pass along to friends and colleagues who might question your ‘frivolous’ videogame hobby—or, more importantly, to justify why you often spend hours swearing at the screen while purportedly in pursuit of ‘fun.’

    The final chapter also offers a tantalizingly brief analysis of how Juul’s lenses might refract outward, beyond games, to culture at large. Specifically targeting the now-widespread corporate practice of gamification, wherein game design principles are applied as motivators and performance measures for non-leisure activities (usually work), Juul reminds us that the technique often fails because workplace performance goals ‘rarely measure what they are supposed to measure’ (120). Games are ideal for performance measurement because of their peculiar teleology: ‘The value system that the goal of a game creates is not an artificial measure of the value of the player’s performance; the goal is what creates the value in the first place by assigning values to the possible outcomes of a game’ (121). This kind of pushback against digital idealism is an important reminder that games ‘are not a pixie dust of motivation to be sprinkled on any subject’ (10), and Juul leaves a lot of room for further development of his thesis beyond the narrow scope of videogames.

    For the converted, The Art of Failure provides cross-disciplinary insights into many of our unexamined play habits. While playing Dark Souls II, I frequently thought of Juul’s triumvirate of design paths. Dark Souls II is an exemplary hybrid—though much of your success is skill-based, chance and labor play significant roles. The algorithmic systems that govern item drops or boss attacks can often sway one’s fortunes toward success or failure, as many speedrunners would attest. And for all the ink spilt about Dark Souls II being a ‘hardcore’ game with ‘old-school’ challenge, success can also be won through skill-less labor. Summoning high-level allies to clear difficult paths or simply investing hours grinding souls to level your character are both viable supplements to chance and skill.

    But what of games that do not fit these paths? How do they contend with failure? There is a rich tradition of experimental or independent artgames, notgames, game poems, and the like that are designed with no path to failure. Standout examples like Proteus, Dys4ia, and Your Lover Has Turned Into a Flock of Birds require no skills beyond operating a keyboard or mouse, do not rely on chance, and require little time investment. Unsurprisingly, games like these are often targeted as ‘non-games,’ and Juul’s analysis leaves little room for games that skirt these borderlines. There is a subtext in The Art of Failure that draws distinctions between ‘good’ and ‘bad’ design. Early on, Juul writes that ‘(good) games are designed such that they give us a fair chance’ (7) and ‘for something to be a good game, and a game at all, we expect resistance and the possibility of failure’ (12).

    There are essentialist, formalist assumptions guiding Juul’s thesis, leading him to privilege games’ ‘unique’ qualities at the risk of further marginalizing genres, creators, and hybrid play practices that already operate at the margins. To argue that complicity is unique to games or that games are the art of failure is to make an unwarranted leap into medium specificity and draw borderlines that need not be drawn. Certainly other media can draw us into complicity, a path well-trodden in cinema’s exploration of voyeurism (Rear Window, Blow-Up) and extreme horror (Saw, Hostel). Can’t games simply be particularly strong at complicity, rather than its sole purveyor?

    I’m similarly unconvinced that games are the quintessential art of failure. Critics often contend that video games are unique as a medium in that they require a certain skill threshold to complete. While it is true that finishing Super Mario Bros. is different from watching the entirety of The Godfather, we can use Juul’s own multi-path model to understand how we might fail at other media. The latter example certainly requires more labor—one can play dozens of Super Mario runs during The Godfather’s 175-minute runtime. Further, watching a film lauded as one of history’s greatest carries unique expectations that many viewers may fail to satisfy, from the societal pressure to agree on its quality to the faculties of comprehension necessary to follow its narrative. Different failures arise from different media—I’ve failed reading Infinite Jest more than I’ve failed completing Dark Souls II. And any visit to a museum will teach you that many people feel as though they fail at modern art. Confronting Barnett Newman’s Onement, I can be as daunting as tackling Dark Souls II’s Pursuer.

    When scholars ask, as Juul does, what games can do, they must be careful that by doing so they do not also police what games can be. Failure is a compelling lens through which to examine our relationship to play, but we needn’t valorize it as the only criterion by which something counts as a game.
    _____


    Nathan Altice is an instructor of sound and game design at Virginia Commonwealth University and author of the platform study of the NES/Famicom, I AM ERROR (MIT, 2015). He writes at metopal.com and burns bridges at @circuitlions.

  • Adventures in Reading the American Novel

    Adventures in Reading the American Novel


    by Sean J. Kelly

    on Reading the American Novel 1780-1865 by Shirley Samuels

    Shirley Samuels’s Reading the American Novel 1780-1865 (2012) is an installment of the Reading the Novel series edited by Daniel R. Schwarz, a series dedicated to “provid[ing] practical introductions to reading the novel in both the British and Irish, and the American traditions.” While the volume does offer a “practical introduction” to the American novel of the antebellum era—its major themes, cultural contexts, and modes of production—its primary focus is the expansion of the American literary canon, particularly with regard to nineteenth-century women writers. In this respect, Samuels’s book continues a strong tradition of feminist cultural and historicist criticism pioneered by such landmark studies as Jane Tompkins’s Sensational Designs: The Cultural Work of American Fiction 1790-1860 (1985) and Cathy N. Davidson’s Revolution and the Word: The Rise of the Novel in America (1986). Tompkins’s explicit goal was to challenge the view of American literary history codified by F.O. Matthiessen’s monumental work, American Renaissance: Art and Expression in the Age of Emerson and Whitman (1941). In particular, Tompkins was concerned with reevaluating what she wryly termed the “other American Renaissance,” namely the “entire body of work” 1 of popular female sentimental writers such as Harriet Beecher Stowe, Maria Cummins, and Susan Warner, whose narratives “offer powerful examples of the way a culture thinks about itself.” 2

    Recent decades have witnessed a growing scholarly interest in not only expanding the literary canon through the rediscovery of “lost” works by women writers such as Tabitha Gilman Tenney3 and P.D. Manvill4, to name a few, but also reassessing how the study of nineteenth-century sentimentalism and material culture might complicate, extend, and enrich our present understandings of the works of such canonical figures as Cooper, Hawthorne, and Melville. In this critical vein, Samuels asks, “what happens when a student starts to read Nathaniel Hawthorne’s The Scarlet Letter (1850), not simply in relation to its Puritan setting but also in relation to the novels that surround it?” (160). Reading the American Novel engages in both of these critical enterprises—rediscovery and reassessment of nineteenth-century American literature—by promoting what she describes as “not a sequential, but a layered reading” (153). In her “Afterward,” Samuels explains:

    Such a reading produces a form of pleasure layered into alternatives and identities where metaphors of confinement or escape are often the most significant. What produces the emergence of spatial or visual relations often lies within the historical attention to geography, architecture, or music as elements in this fiction that might re-orient the reader. With such knowledge, the reader can ask the fiction to perform different functions. What happens here? The spatial imagining of towns and landscapes corresponds to the minute landscape of particular bodies in time. Through close attention to the movements of these bodies, the critic discovers not only new literatures, but also new histories (153).

    It is this “richly textured” (2) type of reading—a set of hermeneutic techniques to be deployed tactically across textual surfaces (including primary texts, marginalia, geographical locations, and “particular bodies in time” [153])—that leads, eventually, to Samuels’s, and the reader’s, greatest discoveries. The reader may find Samuels’s approach to be a bit disorienting initially. This is because Reading the American Novel does not trace the evolution of a central concept in the way that Elizabeth Barnes, in States of Sympathy: Seduction and Democracy in the American Novel (1997), follows the development of seduction from the late eighteenth-century novel to the domestic fiction of the 1860s. Rather, Samuels introduces a constellation of loosely-related motifs or what she later calls “possibilities for reading” (152)—“reading by waterways, by configurations of home, by blood and contract” (152)—that will provide the anchoring points for the set of disparate and innovative readings that follow.

    Samuels’s introductory chapter, “Introduction to the American Novel: From Charles Brockden Brown’s Gothic Novels to Caroline Kirkland’s Wilderness,” considers the development of the novel from the standpoint of cultural production and consumption, arguing that a nineteenth-century audience would have “assumed that the novel must act in the world” (4). In addition, Samuels briefly introduces the various motifs, themes, and sites of conflict (e.g. “Violence and the Novel,” “Nationalism,” “Landscapes and Houses,” “Crossing Borders,” “Water”) that will provide the conceptual frameworks for her layers of reading in the subsequent chapters. If her categories at first appear arbitrary, this is because, as Samuels points out, “the novel in the United States does not follow set patterns” (20). The complex conceptual topography introduced in Chapter 1 reflects the need for what she calls a “fractal critical attention, the ability to follow patterns that fold ideas into one another while admiring designs that appear to arise organically, as if without volition” (20).

    The second chapter of the book, “Historical Codes in Literary Analysis: The Writing Projects of Nathaniel Hawthorne, Elizabeth Stoddard, and Hannah Crafts,” examines the value of archival research by considering the ways in which “historical codes . . . includ[ing] abstractions such as iconography as well as the minutiae derived from historical research . . . are there to be interpreted and deciphered as much as to be deployed” (28). Samuels’s reading of Hawthorne, for example, links the fragmentary status of the author’s late work, The Dolliver Romance (1863-1864), to the more general “ideological fragmentation” (28) apparent in Hawthorne’s emotional exchange of letters with his editor, James T. Fields, concerning the representation of President Lincoln and his “increasing material difficulty of holding a pen” (25).

    Samuels’s third chapter, “Women, Blood, and Contract: Land Claims in Lydia Maria Child, Catharine Sedgwick, and James Fenimore Cooper,” explores the prevalence of “contracts involving women and blood” (45) in three early nineteenth-century historical romances, Child’s Hobomok (1824), Cooper’s The Last of the Mohicans (1826), and Sedgwick’s Hope Leslie (1827). In these works, Samuels argues, the struggle over national citizenship and westward expansion is dramatized against the “powerfully absent immediate context” (45) of racial politics. She maintains that in such dramas “the gift of women’s blood” (62)—often represented in the guise of romantic desire and sacrifice— “both obscures and exposes the contract of land” (62).

    Chapter four, “Black Rivers, Red Letters, and White Whales: Mobility and Desire in Catharine Williams, Nathaniel Hawthorne, and Herman Melville,” extends Samuels’s meditation on the figure of women’s bodies in relation to “the promise or threat of reproduction” (68) in the narrative of national identity; however, in her readings of Williams’ Fall River (1834), Hawthorne’s The Scarlet Letter (1850), and Melville’s Moby Dick (1851), the focus shifts from issues of land and contracts to the representation of water as symbolic of “national dispossession” (68) and “anxieties about birth” (68).

    Samuels’s fifth chapter, “Promoting the Nation in James Fenimore Cooper and Harriet Beecher Stowe,” returns to the question of the historical romance, critically examining how Cooper’s 1841 novel, The Deerslayer, might be read as evidence of “ambivalent nationalism” (102), as it links “early American nationalism and capitalism to violence against women and children” (109). Samuels then considers the possibility of applying such ambivalence to Stowe’s abolitionist vision for the future of America limned in Uncle Tom’s Cabin (1852), a vision founded, in part, on Stowe’s conceptual remapping of the Puritan jeremiad onto the abolitionist discourse of divine retribution and national apocalypse (111-112). Because Stowe “set out to produce a history of the United States that would have become obsolete in the moment of its telling” (111), Samuels argues that we witness a break in the development of historical fiction caused by the Civil War, a “gap” during which “the purpose of nationalism with respect to the historical novel changes” (113).

    Chapter six, “Women’s Worlds in the Nineteenth-Century Novel: Susan B. Warner, Elizabeth Stuart Phelps, Fanny Fern, E.D.E.N. Southworth, Harriet Wilson, and Louisa May Alcott,” and the book’s Afterward—in my opinion, the strongest sections of the book—survey a wide variety of nineteenth-century American women writers, including: Warner, Fern, Southworth, Wilson, Alcott, Caroline Kirkland, and Julia Ward Howe, among others. These discussions explore the ways in which writing functions as a type of labor which “gives the woman a face with which to face the world” (145). Samuels seeks to challenge the over-simplification of “separate spheres” ideology (153) by offering careful critical attention to the ways in which the labor of writing shapes identities in a multiplicity of distinct cultural locations. Hence, Samuels writes: “It is difficult to summarize motifs that appear in women’s writing in the nineteenth century. To speak of women’s worlds in the novel raises the matter of: what women?” (143).

    Admittedly, there are moments when Samuels’s layered readings necessitate extended swaths of summary; the works that become the primary focus of Samuels’s analyses, such as Catharine Williams’ Fall River and the novels of Elizabeth Stuart Phelps and E.D.E.N. Southworth, may be unfamiliar to many readers. At other moments, the very intricacy, novelty, and ambitiousness of Samuels’s reading performances begin to challenge the reader’s desire for linear consistency. Her interpretive strategies, which prioritize reading at the margins, the textual rendering of historical codes, and provocative juxtapositions, produce, at times, a kind of tunneling effect. The reader is swept breathlessly along, relieved when the author pauses to say: “But to return to my opening question” (82). Ultimately, however, Samuels’s critical approaches throughout this book pose an important challenge to our conventional ways of assigning value and significance to nineteenth-century popular fiction. By reading canonical works such as Moby Dick and The Scarlet Letter with and against the popular crime novel Fall River, for example, she is able to map similarities between all three works in order to create “a more complete fiction” (83). All of these novels, she writes, “lure New Englanders to die. To read them together is to recover the bodies of laboring women and men from watery depths” (83). This type of creative reading, to invoke Ralph Waldo Emerson’s phrase, allows us potentially to tease out significant conflicts and tensions in well-known works that might have otherwise remained invisible in a conventional reading. “What happens,” she asks, “when we remember that Captain Ahab is a father?” (83). Because Samuels not only offers insightful interpretations of nineteenth-century American novels but also introduces new and creative ways to read—and ways to think about the meaning of reading as a critical practice—Reading the American Novel must be viewed as a valuable addition to American literary scholarship.

    _____

    Sean J. Kelly is Associate Professor of English at Wilkes University. His articles on nineteenth-century American literature and culture have recently appeared in PLL, The Edgar Allan Poe Review, and Short Story.

    _____

    notes:
    1. Tompkins, Jane. Sensational Designs: The Cultural Work of American Fiction 1790-1860. New York: Oxford UP, 1985. 147.

    2. Ibid., xi.

    3. Tenney, Tabitha Gilman. Female Quixotism: Exhibited in the Romantic Opinions and Extravagant Adventures of Dorcasina Sheldon. 1801. Intro. Cathy N. Davidson. New York: Oxford UP, 1992.

    4. Manvill, P.D. Lucinda; Or, the Mountain Mourner: Being Recent Facts, in a Series of Letters, from Mrs. Manvill, in the State of New York, to Her Sister in Pennsylvania. 1807. Intro. Mischelle B. Anthony. Syracuse: Syracuse UP, 2009.

  • Transgender Studies Today: An Interview with Susan Stryker

    Transgender Studies Today: An Interview with Susan Stryker

    _____________________________________________________________________________________

    Petra Dierkes-Thrun interviews Susan Stryker, leader of an unprecedented initiative in transgender studies at the University of Arizona, and one of two founding co-editors of the new journal TSQ: Transgender Studies Quarterly (together with Paisley Currah). Stryker is Associate Professor of Gender and Women’s Studies, and Director of the Institute for LGBT Studies at the University of Arizona. The author or editor of numerous books and articles on transgender and queer topics for popular and scholarly audiences alike, she won an Emmy Award for the documentary film Screaming Queens: The Riot at Compton’s Cafeteria, a Lambda Literary Award for The Transgender Studies Reader, and the Ruth Benedict Book Prize for The Transgender Studies Reader 2.
    _____________________________________________________________________________________

    Transgender Studies initiative at the University of Arizona. Left to Right (Front): Paisley Currah, Susan Stryker, Monica Casper, Francisco Galarte; (Back): Eric Plemons, Max Strassfeld, Eva Hayward. Not pictured: TC Tolbert. Photo by Paisley Currah.

     

    DIERKES-THRUN:  The University of Arizona recently initiated an unprecedented cluster hire in transgender studies and is actively working towards a graduate degree program in transgender studies. Can you tell us a bit more about the history and the thinking behind this strong, coordinated move at your institution?

    STRYKER: After the University of Arizona (UA) recruited me away from my previous job to direct the Institute for LGBT Studies in 2011, I came in saying that I wanted to put equal emphasis on the “T” in that acronym, and they were supportive of that. But none of us anticipated that the T was going to become the tail that wagged the dog, so to speak. It would not have happened had I not been courted by another, much more prestigious university during my second year on the job. UA asked what it would take to retain me, and I said I wanted to do something unprecedented, something I would not be able to do at that other university, something that would transform my field, while also putting UA on the map in a bold new way. I said I wanted to launch a transgender studies initiative, which represents my vision of the field’s need to grow. The institution said yes to what I proposed, and to the upper administration’s credit, they saw an opportunity in what I pitched.

    The truly unprecedented institutional commitment came in the form of strategic hiring support for a transgender studies faculty cluster. As UA has been quick to point out to conservative critics of this initiative, no new funds were identified to create these faculty lines—they came from existing pools of discretionary funds, and represent a shifting towards emerging areas of study of faculty lines freed up by retirement or resignation. That said, no university anywhere in the world has ever conducted a faculty cluster hire in transgender studies. Four lines were made available: two in the College of Social and Behavioral Sciences, and two in colleges elsewhere in the University. We wound up filling three of those positions last year—hiring in medical anthropology, feminist science and technology studies, and religious studies—and are in negotiations about where to place the remaining line.

    UA has a strong institutional culture of interdisciplinary collaboration, as well as a good track record of supporting LGBT issues, so this fit right in. They understand that transgender issues have a lot of cultural saliency at the moment, and that studying the rapid shifts in contemporary gender systems, including the emergence of historically new forms of gender expression, particularly in the context of the biomedical technologization of “life itself,” is a legitimate field of study and research. Pragmatically, they saw the initiative as a way to attract and retain innovative and diverse faculty members, to bring in out-of-state tuition dollars, to compete for external research grants, and to push back against the popular misconception that Arizona is only a politically reactionary place. From the institution’s perspective, there was no advocacy agenda at work here, just an opportunity to increase the bottom line by building on existing faculty and research strengths.

    The lowest-hanging fruit, which can be accomplished with relatively little bureaucracy, is a graduate concentration, minor, or designated emphasis in transgender studies, and there is definitely support for that. We hope to have that in place within a year. It is also possible that a currently existing MA program in Gender and Women’s Studies could be adapted relatively easily to accommodate a transgender studies emphasis, but that involves a lot of inside-the-ballpark negotiation with current GWS faculty. Actually creating a new, stand-alone graduate program at the state’s land grant university would require approval by the Arizona Board of Regents, and ultimately by the Governor’s Office, so that will be a longer and tougher row to hoe.

    The final element of the initiative is approval to pursue establishing a new research enterprise called the “Center for Critical Studies of the Body.” The rationale here was to provide a non-identitarian rubric that could bring transgender studies into dialog with other interdisciplinary fields, such as the study of disability, trauma, sports, medical humanities, etc. No funds were provided for this, just a green light for starting the process of cobbling a center together.

    Of course, it’s vital to ask the question why, in an era when the teaching of Chicano/a studies is literally being outlawed in Arizona public schools, when xenophobic attitudes inform the state’s border politics, attention to transgender identities and practices can appear palatable. How does institutional investment in transgender studies at this particular historical juncture play into a deep logic of “managing difference” through expert knowledges, or get positioned as less threatening than calls for racial and economic justice? As the person heading up this initiative, I want to be attentive to ways I can use trans studies to advance other concerns that currently have a harder time getting traction in Arizona. I think my deepest challenge in trying to spearhead this initiative lies in resisting the ways that transgender studies can be co-opted for neoliberal uses that fall short of its radical transformative potential.

    DIERKES-THRUN: The University of Arizona also provided financial and logistical support for the establishment of a new journal of record for the field of transgender studies, TSQ: Transgender Studies Quarterly, published by Duke University Press in 2014, with you and Paisley Currah (Professor of Political Science at Brooklyn College and the CUNY Graduate Center) as founding co-editors. How did that come about?

    STRYKER: Launching this journal had been a long-term project of mine and Paisley’s and was already well underway before the opportunity to launch the broader transgender studies initiative came up, but it nevertheless constitutes an important element of what has become the bigger project. UA has significantly supported the establishment of  TSQ by contributing about one-third of the start-up costs. Those funds were cobbled together from a lot of different institutional sources, including the Provost’s Office, the office of the Vice President for Research, the College of Social and Behavioral Sciences, the Department of Gender and Women’s Studies, and the Institute for LGBT Studies.

    DIERKES-THRUN: For our readers who are just now becoming acquainted with transgender studies as a diverse intellectual and academic field, how would you summarize its most important constants and changes over the past two decades? What are some important subareas and affiliated fields for transgender studies?

    STRYKER: I’d recommend taking a look at the tables of contents in the two volumes of The Transgender Studies Reader. The first volume, from 2006, offers a genealogy of field formation, highlighting historical ties to scientific sexology, feminism, and poststructuralist theory.

    It includes work from the “transgender moment” of the early 1990s that changed the conversation on trans issues and tackles many of the topics that were of interest in the field’s first decade—questions of self-representation, diversity within trans communities, the increasing visibility of trans-masculinities. The second volume, from 2013, showcases the rapid evolution of the field in the 21st century, which is self-consciously moving in strongly transnational directions away from the Anglophone North American biases of the field’s first decade. There has been much more attention paid to the relationship between transgender issues and other structural forms of inequality and injustice, and, post 9/11, to questions about borders, surveillance, and security—and the ways that non-conventionally gendered bodies experience heightened scrutiny and limitations on movement, and can be seen as posing a terroristic threat to the body politic. There are increasing affinities with posthumanist work, as well as with animal studies, critical life studies, and the so-called “new materialism.” The first several issues of TSQ suggest something of current directions in the field: they address decolonization, cultural production, population studies, transanimalities, higher education studies, archives, transfeminism, political economy, sex classification, translation, surgery, sinophone studies, and psychoanalytic theory.

    DIERKES-THRUN: Can you say something about the trans- and international context of transgender studies today? What are the most important challenges there and why should we be thinking about them?

    STRYKER: The field has indeed been moving in a strongly transnational direction for more than a decade. I was particularly pleased that The Transgender Studies Reader 2 was awarded the 2013 Ruth Benedict Prize from the Association for Queer Anthropology/American Anthropological Association, precisely because the field of transgender studies challenges us to think anew about how we understand sex/gender/identity cross-culturally. I think one of the biggest intellectual challenges has to do with fully acknowledging that some of the fundamental categories that we use to understand “human being”—like man and woman—are not ontologically given, but rather are themselves historically and culturally variable and contingent. Translation is also a huge problem—how do we facilitate the exchange of knowledge across language and culture, when the very categories we use to organize and recognize our own being and that of others can be so deeply incommensurable?

    DIERKES-THRUN: In the introduction to the inaugural issue of TSQ, the editors write, “Transgender studies promises to make a significant intellectual and political intervention into contemporary knowledge production in much the same manner that queer theory did twenty years ago.” What are some of the most needed intellectual and political interventions that you anticipate transgender studies can and will make?

    TSQ cover

    STRYKER: First and foremost, I see it creating more space for critical conversations that involve transgender speakers. Bringing trans studies into the academy is one way of bringing more trans people into the academy. Of course I’m not arguing that trans studies is something that only trans people can participate in. Far from it—anybody can develop an expertise in this area, or feel that they have some sort of stake in it. But just as disability activists said in the ’70s and ’80s, “nothing about us without us.” What’s most significant is creating an opportunity for the privileged and powerful kinds of knowledge production that take place in the academy (about trans topics or any other area that involves people) to be not just objectifying knowledge, what we might call “knowledge of,” but also “knowledge with,” knowledge that emerges from a dialog that includes trans people who bring an additional kind of experiential or embodied knowledge along with their formal, expert knowledges. It’s the same rationale for any kind of diversity hiring initiative. People have different kinds of “situated knowledges” that derive from how they live their bodily differences in the world. It’s important to have people in critical conversations who come from different perspectives based on race/ethnicity, gender, ability, national origin, first languages, etc. Transgender represents a different kind of difference that offers a novel perspective on how gender systems, and therefore society, work.

    DIERKES-THRUN: You also say, in the same TSQ introduction, that transgender studies “offers fertile ground for conversations about what the posthuman might practically entail (as well as what, historically, it has already been).” The posthuman is a topic of interest to many of our readers. Could you map out for us what specific or broader contributions transgender studies can make to past and future discussions of the posthuman?

    STRYKER: The first thing we say of a new child is “It’s a girl” or “It’s a boy.” Through the operation of language, we move a body across the line that separates mere biological organism from human community, transforming the status of a nonhuman “it” into a person through the conferral of a gender status. It has been very difficult to think of the human without thinking of it through the binary gender schema. I think a lot of the violence and discrimination trans people face derives from a fundamental inability on the part of others to see us as fully human because we are considered improperly gendered, and thus lower on the animacy hierarchy, therefore closer to death and inanimacy, therefore more expendable and less valuable than humans. A transgender will to life thus serves as a point from which to critique the human as a universal status attributed to all members of the species, and to reveal it instead as a narrower set of criteria wielded by some to dehumanize others.

    DIERKES-THRUN: The journal description announces that TSQ “will publish interdisciplinary work that explores the diversity of gender, sex, sexuality, embodiment, and identity in ways that have not been adequately addressed by feminist and queer scholarship.” What have been some of feminist and queer theory’s most important blind spots when it comes to thinking about the transgender experience?

    STRYKER: Transgender Studies emerged as an interdisciplinary field in the early 1990s, at roughly the same time as queer theory. There’s been a robust conversation about the relationship between the two, especially given the simultaneous formation of what’s come to be called the “LGBT” community. I contend that trans studies, as it was first articulated, shared an agenda with queer studies in the sense that it critiqued heteronormative society from a place of oppositional difference. It argued that “queer” was not just a five-letter word for homosexual, but rather that queer encompassed a range of “different differences” that all had a stake in contesting various sorts of oppressive and coercive normativities related to sex, sexuality, identity, and embodiment. As queer theory developed, however, issues of sexuality really did remain in the forefront. From a transgender studies perspective, the whole distinction between homo and hetero sexualities depends on a prior agreement about what constitutes “sex,” on who’s a man and who’s a woman. Destabilizing those material referents, or needing to account for their sequentiality, their fuzzy boundaries, their historicity or cultural specificity, or their hybridity really opens up a whole different set of questions. In addition, trans studies is not organized primarily around issues of sexuality; equally important are questions of gender, bodily difference, health care provision, technology studies, and a host of other things that have not been central to queer studies. So the debate between queer and trans studies has been about whether they are different parts of the same big intellectual and critical project, employing the same transversal methodologies for bringing into analytical focus and contesting oppressive normativities, or whether they overlap with one another—sharing some interests but not others—or whether they are really two different enterprises, concerned with different objects of study.

    My personal answer is all of the above, sometimes. At its most radical, trans studies offers a critique of the ways in which gay and lesbian liberation and civil rights struggles have advanced themselves by securing greater access to citizenship for homosexuals precisely through the reproduction of gender normativities—the liberal “I’m just like a straight person except for who I have sex with” argument. What actually provides the commonality there between homo and hetero is an agreement about who is a man and who is a woman, and how we can tell the difference between the two. Trans studies puts pressure on that tacit agreement.

    With regard to feminism, I think the major innovation transgender studies offers has to do with how gender hierarchies operate. In the most conventional feminist frameworks, what has seemed most important is to better understand and thereby better resist the subordination of women to men. Without contesting that basic tenet, transgender studies suggests that it is also necessary to understand how contesting the hierarchized gender binary itself can increase vulnerabilities to structural oppression for those people who don’t fit in, or who refuse to be fixed in place. That is, in addition to needing to address power structures that privilege normatively gendered men and masculinity over normatively gendered women and femininity, we also need to address a wide range of gender nonnormativities, atypicalities, transitivities, and fluidities. I see this as extending, rather than challenging, fundamental feminist insights.

    DIERKES-THRUN: Many of our readers may not know this, but traditionally, the relationship between queer theory and transgender studies and activism has been quite contentious. Is the fact that there is now a separate academic journal for trans studies indicative of an ongoing divide with queer studies, despite what you call the recent “transgender turn”?

    STRYKER: There’s a big enough and deep enough conversation on trans topics to merit and sustain an independent journal for the field, that’s all. There is more publishable scholarship on trans issues and topics than will ever fit into GLQ, given that journal’s broader scope, or that can ever fit into one-off special issues of disciplinary or interdisciplinary journals devoted to trans topics. Worrying that the advent of TSQ signals a divergence or parting of the ways between queer and trans studies is an overblown concern. Personally, I’d hate to see queer and trans studies drift further apart, because I feel strongly committed to both. I think trans studies is expansive enough to encompass a lot of queer scholarship on sex/gender nonnormativity, while also advancing scholarship on transgender-related topics that queer studies has never been particularly interested in.

    DIERKES-THRUN: As someone who has worked as a historian, social activist for trans rights and documentary filmmaker on trans history, how would you describe the state of our society’s understanding and attitudes towards transgender today? Does it feel like the tide has finally shifted?

    STRYKER: I think it is a mixed bag. Pretty much everybody today knows that there is this thing called “transgender”, but they can’t say exactly what it is. They know if they want to be considered progressive they are supposed to be OK with it, even if they secretly feel squeamish or judgmental or confused. That’s an improvement over the situation in decades past, when pretty much everybody agreed that there were these sick people and freaks and weirdoes who wanted to cross-dress or take hormones or cut up their genitals, but they were not important, and society really didn’t have to pay any attention to such a marginal and stigmatized phenomenon. So yes, there has been a shift, but yes, there is still a long way to go.

    DIERKES-THRUN: Which projects are you working on now?

    STRYKER: I have a really heavy administrative load right now. I was already trying to run a research institute, teach, commute between my job in Tucson and my home in San Francisco, and launch a new peer-reviewed journal, before the trans studies initiative became a possibility. That has definitely been a “be careful what you ask for” lesson, in terms of workload. I feel like I don’t write anything these days that doesn’t start with the words “Executive Summary” and end with the words “Total Budget.” It will probably be like that for a couple more years, especially until I complete my agreed-upon term of service as director of the Institute for LGBT Studies at the end of 2016.

    But there are a couple of projects percolating along on the back burner. At the time I came to Arizona, I was working on an experimental media project called Christine in the Cutting Room, about the 1950s transsexual celebrity Christine Jorgensen, who burst onto the global stage when news of her sex-change surgery made headlines around the world. The project was sparked for me by a comment Jorgensen made in an interview with television journalist Mike Wallace. She was talking about her pre-fame job as a film cutter in the newsreel division at RKO Studios in New York, and said that she “used to work on one side of the camera” because she “didn’t know how to appear on the other side.” That gave me the idea of approaching the question of transsexuality from an aesthetic perspective, as a technique of visualization, accomplished through media manipulation. I saw Jorgensen using cinematic techniques of media cutting, suturing, image creation, and projection to move her from one side of the camera to the other, by moving herself from one kind of “cutting room” to another. I have always been interested in ways of exploring trans experience outside the pervasive psychomedical framework, and this project lets me do that. I mix archival audiovisual media of Jorgensen herself, found sound and images, electronic glitch music, and a scripted voice-over narration performed by an actress playing Jorgensen. At some point I hope to edit this material into a narrative film, but I have found it also works well as a multimedia installation in galleries and clubs.

    I am also trying to write a book. I’ve finally hit on a way to piece together into one overarching argument lots of fragments of abandoned or incomplete projects on embodiment and technology, the early Mormons, members of San Francisco’s elite Bohemian Club, transsexuals, urban history, and popular music. My working title is Identity is a War Machine: The Somatechnics of Gender, Race, and Whiteness. It’s about the processes through which we incorporate—literally somaticize—culturally specific and historically revisable categories of individual identity within biopolitical regimes of governmentality. I won’t say any more about it at this time, because this book itself could be one of my many unfinished projects.

    DIERKES-THRUN: Transgender as a topic of public curiosity seems to be everywhere in U.S. media culture these days, from Laverne Cox and Orange Is the New Black to Chelsea Manning, Andreja Pejic and others. (There is also a lot of naïve conflation with drag and cross-dressing, as the media treatment of Conchita Wurst illustrates.) Do you worry about the glamorization and commodification of certain kinds of trans bodies in the media and the silence around others? Are famous celebrity spokespeople like Laverne Cox or Janet Mock good or bad for the movement, from your perspective?

    STRYKER: In the wake of the repeal of the U.S. military’s Don’t-Ask-Don’t-Tell policy regarding homosexual service members, and after the Supreme Court decisions on marriage equality, transgender has emerged in some quarters as the “next big thing” in minority rights. I have a lot of problems with that way of framing things, and am very leery of the ways that story functions as a neoliberal progress narrative, and of the ways in which protecting trans people (now that gays have been taken care of) can exemplify the values of inclusivity and diversity, so that the US or the West can use support for trans rights to assert influence over other parts of the world who purportedly do not do as good a job on this front. What is truly amazing to me, after having been out as trans for nearly a quarter century, is the extent to which it is now becoming possible for some trans people to access what I call “transnormative citizenship,” while at the same time truly horrific life circumstances persist for other trans people. Race really does seem to be the dividing line that allows some trans people to be cultivated for life, invested in, recognized, and enfolded into the biopolitical state, while allowing others to be consigned to malignant neglect or lethal violence. The contemporary celebrity culture of transgender plays to both sides of this dichotomy. It’s increasingly possible to see trans people represented as successful, beautiful, productive, or innovative (and I salute those trans people who have accomplished those things). At the same time, you see people like Laverne Cox and Janet Mock using their platform to call attention to the persistence of injustices, particularly for trans women of color. I am truly inspired by the way they both speak out on race, classism, the prison-industrial complex, and sex work.

  • Futures of American Studies Institute: States of American Studies

    Futures of American Studies Institute: States of American Studies


    Don Pease and The Futures of American Studies Institute prepare for the summer institute, running June 16-22:

    The seventeenth year of the Institute is the fifth of a five-year focus on “State(s) of American Studies.” The term “state(s)” in the title is intended to refer at once to the “state” as an object of analysis, to the state as an imagined addressee and interlocutor for Americanist scholarship, as well as to the re-configured state(s) of the fields and areas of inquiry in American Studies both inside and outside the United States. As such, we are inviting both scholars well known as “Americanists” internationally and those whose theoretical frameworks, objects of study, and disciplinary inclinations promise to transform the field’s self- understanding.

    Hit the jump for details.


  • Anti-Zionism as Antisemitism

    Anti-Zionism as Antisemitism

    The Case of Italy,

    an intervention by John Champagne

    ~

    In several recent essays and articles on the relationship between Italian Jews in the diaspora and contemporary Israeli political and military actions toward the Palestinians, an interesting series of contradictions emerges. In some instances, critique of the military policies of the state of Israel is equated with antisemitism, even when that critique is proffered by Italian Jews. The argument, presented, for example, by Ugo Volli in his “Zionism: a Word that not Everyone Understands,” is that there is a connection between military and political attacks on Israel and what he terms a worldwide and constant economic and cultural campaign of de-legitimation and demonization of that state.1 Volli further contends that these two are directed not simply at Israelis, but at all Jews. “For this reason,” writes Volli, “there is no fundamental distinction between antizionism and antisemitism, between hate for Israel and for the Jews. All of this is well noted and not worth explaining here in greater detail.”2 This position dates from at least July of 1982, when, in response to critiques of the Israeli invasion of Lebanon voiced by Italian Jews in the diaspora, Jewish journalist Rosellina Balbi published in La Repubblica “Davide, discolpati!” an article defending Israel’s actions as defensive rather than offensive.3 In this article, Balbi equated antizionism with antisemitism by noting that any critique of the state of Israel has punctually provoked “tremors of anti-semitism” across Europe. Just a few months later, Italian war correspondent Oriana Fallaci suggested to an audience at Harvard that no one in the US would speak out against Israel because of “the contemporary fear of being blackmailed with the accusation of hating the Jews.”4

    A professor of Semiotics at the Università degli Studi di Torino and self-described political activist, Volli is also a journalist who has written for major Italian dailies as well as informazionecoretta.com, an Italian website whose stated goal is to guarantee that the public receives correct information on Israel.5 Antisemitism is a frequent theme in Volli’s work. Most recently, for example, he has argued that “history shows that antisemitism generates hatred of Israel, and not the inverse.”6 Such a position leads Volli to conclude that “the European Left” is antisemitic, as is “almost a third of the population” in Croatia, Belgium, and Spain.7

    Volli’s “Zionism” appeared in Shalom, the official monthly magazine of information and culture of the Comunità Ebraica di Roma.8 But who is the audience to which his article is directed? The word Comunità (with a capital C) is perhaps best translated as “Congregation.” It has a structure and a constitution.9 As the statutes of the Union of the Jewish Italian Communities (of which the Roman Comunità is a member) explain, in order to fully avail oneself of the resources of the Comunità, one must be an official member.10 The process is formalized via a declaration of one’s Jewishness.11 This declaration can be challenged, in which case, one can file an appeal.12 Under the advice of the rabbi, the consiglio or parliament – a body of twenty-eight representatives elected directly by the members of the Comunità every four years – has the final say.13 Specific processes are also outlined for formally leaving the Comunità.14

The history of the structure of the Italian Jewish Communities is a complex one, encompassing a great span of time. It includes Renaissance ghetto life, wherein Jews practicing different “rites” – not only the familiar Sephardic and Ashkenazi, but also the Italian, Sicilian, Levantine, and Catalan rites – were required to worship in a single synagogue. Another significant moment was Fascism, when all forms of religious worship were legally organized and regulated as part of the overall fascistization of Italian society.15 Royal Decree n. 1731 of October 30, 1930, created the Union of Italian-Jewish Communities (Unione delle Comunità Israelitiche Italiane), which represented Italian Judaism in its relations with the state.16

The term “comunità,” however, can also refer to the English “community.” When one speaks of the Jewish “comunità,” therefore, one might be using the term in this looser sense. This might include, for example, non-religious Italian Jews, or out-of-town Jews attending the synagogue or other events presented by the Comunità’s museum and archive, or someone like Natalia Ginzburg – who, although she ultimately converted to Catholicism, understood her Judaism as what one writer has called a “moral identity” – or even atheist Jews like the scientist Rita Levi-Montalcini.17 And while the Comunità is officially Orthodox, not all of its members keep kosher, for example, or wear the yarmulke outside of Temple.

Writing in Shalom, Volli would appear to be addressing an audience composed of both the Comunità and the community, as well as non-Jews. Many of the latter have contact with the Comunità via its museum in particular, which anticipates visitors from all over the world. Wall text and brochures, for example, are in Italian, Hebrew, and English, and both English and Italian tours of the two synagogues housed in the museum are provided daily. The tour guides inform museum-goers about the existence of Shalom, and copies of the magazine are available free of charge.

Noting in passing that, on the left, there are “numerous noted intellectuals of Jewish origin actively marshaled against the existence of Israel, from [Noam] Chomsky to [Ilan] Pappé to Judith Butler,” as well as less aggressive (and, according to the author, therefore more insidious) organizations like J Street and its European counterpart, J Call, Volli ends his article by calling for a continuing defense of Zionism. He singles out for particular reprobation critiques of Israel that have appeared recently in the official organs of the Italian Jewish press. In Volli’s eyes, to be Jewish is – or should be – to support the state of Israel. (While Volli claims only to be speaking against those who seek the dismantling of the state, in attacking J Street, an organization that explicitly calls for a two-state solution, he tips his hand.)

However, the assumption that all Italian Jews are somehow representative of the state of Israel – a conclusion that would seem to follow logically from Volli’s argument – is also labeled antisemitism by other Jewish intellectuals working in the Italian academy today. For example, Marianna Scherini, who holds a research doctorate in Anthropology, History and Theory of Culture from the Università di Siena, begins her argument that, in their coverage of the 1982 war, both the Italian leftist press and the Italian daily newspapers offered converging, critical analyses of Israel, with a discussion of a new (post-war) antisemitism that is specifically anti-Israeli in its content.18 Due to the aforementioned war in Lebanon and the accompanying massacre of Palestinians in the refugee camps of Sabra and Shatila, 1982 was a particularly painful moment for the Italian Jewish community. Perpetrated by Christian Phalangists assisted by the Israeli military, the massacre was publicly critiqued by some Italian Jewish intellectuals – most notably, Primo Levi – and followed in Rome by the terrorist bombing of the Great Synagogue. (In fact, even prior to the massacre, the invasion had been condemned by Levi and several other intellectuals, including Franco Belgrado, Edith Bruck, Ugo Caffaz, Miriam Cohen, Natalia Ginzburg, David Meghnagi, and Luca Zevi.)19

The synagogue bombing resulted in the death of a child, Stefano Gay Tache. The killing took place on the holiday of Shemini (also spelled Shmini) Atzeret (also spelled Azzeret), which the English version of the catalog of the Jewish Museum of Rome states is “a day when children receiving [sic] a public blessing.”20 Since the bombing, the Great Synagogue can only be visited via guided tours led by volunteers, who typically reference the attack. A B’nai B’rith Europe webpage repeats the claim that the attack took place when “a service of blessing for children was being held,” though it suggests that this attack “was perpetrated opposite the Grand Synagogue in Rome.”21 In fact, the blessing referenced occurs not on Shemini Atzeret but rather on the next day, Simchat Torah. In Israel, however, these holidays are celebrated on the same day. Regardless of this discrepancy, 1982 is sometimes cited as marking a definitive split between Italian Jews and the Italian left.22

Scherini concludes her essay by arguing that both the Italian leftist press and the dailies tended to isolate Israeli actions from their political and historical context23 and to show no interest in the specific politics of the Palestinians,24 as well as to equate Israeli actions in Lebanon with the Shoah and to suggest a transformation of Israeli Jews from victims into perpetrators of a contemporary persecution of the Palestinian people (196).25 She then explicitly links contemporary, post-war antisemitism with her contention that, “in the period under examination [1982] Israel constitutes a virtual ‘Jewish collective’ in the imagination of the Italian daily press.”26 According to the author, the treating of Israel as “the mirror through which to observe Italian Jews, and vice versa” and the corollary homogenizing of all Jews is an instance of antisemitism.27

A third position: some Italian Jewish intellectuals draw a relationship between contemporary antisemitism and the position, espoused by some Western intellectuals, that Israel represents the logical outcome, taken to its furthest point, of Western imperial expansion. This connection is suggested briefly by the historian Guri Schwarz, who was a 2013–2014 Viterbi Visiting Professor at UCLA’s Center for Jewish Studies. Schwarz’s contention is that, in labeling Israel a kind of “worst case scenario,” Western antisemitism arises from a fear of the proximity of the self to the Other, a rejection of the Other in the self.28 That is, antisemitism arises from the fear that Jews are too similar to “the rest of us.” Schwarz’s argument unfortunately de-historicizes the trope, which appears to have arisen in the wake of the ’67 war. It found its condition of possibility in the linking of this war to contemporaneous US imperial expansion in Southeast Asia, as noted in Andrea Becherucci’s analysis of the coverage of the ’67 war in three left-wing Italian journals (119).29

Beyond the fact that all three of these positions seem to foreclose, to varying degrees, any critique whatsoever of the military policies of the state of Israel, they also circumvent any discussion of the historical contradictions of a secular religious state. Clearly, the idea of a Jewish state is a product of the nineteenth century. It is historically linked to the “importation” to Europe, from the US and Latin America, of the model of the Enlightenment (secular) nation-state and overdetermined by (post-war) Cold War Western interests. This refusal to historicize Israel results in a particular double-bind: on the one hand, Israel has the right to act as all other states do – that is, to take both defensive and offensive action against perceived threats; this was the very argument debated in the diasporic community in 1982, with the invasion of Lebanon, perceived by some as Israel’s first offensive war30 – and, on the other hand, Israel is a “special case” – i.e., a state that, owing to the historical circumstances of its founding, is not subject to international law and the dictates of the UN, for example.

As for the tension between the religious and the secular, an emblematic example is the insistence by some Italian Jews that the Jewish presence in Italy dates from 161 BCE because the first book of Maccabees says so. (It may in fact date from earlier, as the Tunisian Jewish community dates itself, at least anecdotally, to the first diaspora, for example.) This in turn raises the question of how one writes the history of what is understood to be eternal – a problem that leads some scholars to argue that Jewish historiography finds its conditions of possibility in the Haskalah, the nineteenth-century Jewish Enlightenment (Yerushalmi). In Italy, the problem of how to write Jewish history is further complicated by the fact that the reform movement only recently came to Italy, and so the Roman Comunità is “officially” Orthodox.31 This means that, in the Jewish Museum of Rome’s presentation of the history of the Jews, Biblical events for which there is little archaeological evidence are intermixed with such historically verifiable events as the destruction of the Second Temple, commemorated in the Roman Forum’s Arch of Titus.

In drawing attention to the irresolvable tensions between the religious and the secular that necessarily inform the idea of a Jewish state, I am not suggesting, as Schwarz fears its antisemitic critics do, that Israel is “any worse” than the US in regard to ignoring the UN, for example. In fact, we know well that, by virtue of its (declining) world hegemony, the US often chastises other states for breaking international law while itself flouting that law. I am suggesting, however, that, while no one would in all likelihood accuse US intellectuals who critique US foreign policy of being, say, “racist,” the creation of a Jewish state has historically ensured that any critique of that state will be equated in some quarters with antisemitism, even a critique produced by Jews, and that there seems to be a kind of willed refusal on the part of some Italian Jewish intellectuals to work through this contradiction – a contradiction that finds one of its conditions of possibility in the modern “racialization” of Judaism that occurs via Nazi and Fascist antisemitism and its links to eugenics. Both Italian Fascism and Nazism deployed this antisemitism in an effort to invent national subjects, the Jew being the Other against which both Italian and German identities hoped to consolidate themselves and ward off their precarious histories.

As its corollary, scholars who maintain that antizionism equals antisemitism must treat the latter itself as ahistorical – that is, as if there is no significant difference between pre-modern and modern forms of antisemitism. Rather than understand Italian Zionism as a kind of Foucauldian counter-discourse made possible by nineteenth-century antisemitism and the antisemitic policies of Mussolini’s regime so well documented by Michele Sarfatti, Italian Jews who unwaveringly support the military policies of Israel today must construct their Comunità as always already Zionist.32 This, despite the fact that it is well known that many of the Jews who participated in the early years of post-Unification Italy were critical of Zionism and that, “before the Racial Laws of 1938, Italian Zionism was essentially the fruit of actions by a group of rabbis.”33

A further corollary is that the term anti-zionism can refer both to a critique of the policies of the state of Israel and to calls for its dismantling or even destruction. Volli himself notes that at least one of the Italian Jewish authors he chastises admits (Volli’s word) that “for the great majority of Italian Jews, Israel remains an ideal and a patrimony to defend.” Interestingly, Volli uses patrimonio and not, for example, stato, the former having connotations that are both monetary and, more typically in Italian cultural discourse, related to artistry, history, and heritage.

Meanwhile, according to its own discourse about itself, at least as presented by its institutions such as the Jewish Museum of Rome, the shrinking of the Italian Jewish community is attributed not to any discontent with the Comunità’s refusal to critique Israeli military policies (and its insistent presentation of itself to the larger public as always having been supportive of Zionism), nor to the lack in Rome of a thriving Italian Jewish reform movement, but rather to mixed marriages. What the events of 1982 have produced is apparently an unhealable rift between the Italian left and the official representatives of the Italian Jewish Comunità.

The problem of who exactly is an Italian Jew is further exacerbated by the fact that Italian Jews have lived their identities in ways far more complex than either the crude term “assimilation” or its opposite – autonomy? non-incorporation? – can signify. As long ago as 1985, Primo Levi “defended” himself against the charge, made in the US magazine Commentary, that he was assimilated, with the simple rejoinder, “I am. There do not exist in the Diaspora Jews who are not, to greater or lesser degrees: if only for the fact of speaking the language in which they live. I claim, for myself and for everyone, the right to choose the level of assimilation that is best suited to their culture and their surroundings.”34

So, while Italian Jews – both those who are official members of the Comunità and those who live their Judaism in a variety of different ways – clearly hold varying opinions on the current military policies of the state of Israel, it is next to impossible to produce a critique of the state of Israel as a state without calling up the specter of antisemitism. That is, once a state is defined by Judaism, antisemitism is the necessary and irreducible outcome of any critique of Israel. As long as a critique of the very idea of the nation-state is part and parcel of leftist politics, and Israel continues to define itself as the (and not even a) Jewish state, critique of Israel will equal antisemitism, at least as it is defined by the aforementioned Italian Jewish intellectuals. These historical conditions create a particularly painful situation for those Italian Jews on the left, as they may feel as if they have no place in any Italian Jewish Comunità.

    Furthermore, once a Jewish state has been created in the lands formerly also inhabited by Palestinians, the only possible logical corollary is the formation of a Palestinian state. This, again, is irreducible; the logical outcome of the Palestinian diaspora is a Palestinian state. Thus the contradictory position of a global left that on the one hand engages in a critique of statehood and on the other argues for a Palestinian state. This is not hypocrisy or bad faith; it is a position overdetermined by history.

These historical contradictions make it extremely difficult even to write of the relationship between Italian Jewry, Israel, and Zionism, and the historiographical problems of locating a post-war Jewish resistance to Zionism are substantial, since the keeper of the official records is the Comunità (which has an archive). Yet another problem bequeathed by Italian history: prior to the Shoah, Zionism was understood by many Italian Jews to be equivalent not to a call for Jewish statehood but rather to philanthropic support for poor Jews in the Levant; even in the post-war years, the number of Italian Jews who immigrated to Israel was relatively minimal. Volli’s argument that Zionism is a word that not everyone understands is exactly (and not just figuratively) correct, for history has rendered it undecidable. The only way “out” of this impasse, even provisionally, is further work on the history of Italian Judaism, undertaken by parties who work scrupulously to make their interests as visible as they can. Unfortunately, an initial review of the debates in Italian Judaism around the events of 1982 reveals how little progress has been made on the issue of the rights of the Palestinians to self-determination – a phrase used by Levi and his co-signers in their response to the invasion of Lebanon.

As Roberto Esposito suggests, part of the problem with the term “community” is that it is almost always imagined as something that is possessed in common, something that can therefore be “lost” and re-found.35 Using an etymological approach, Esposito instead argues for a focus on the munus in community:

    the munus is the obligation that is contracted with respect to the other and that invites a suitable release from the obligation. The gratitude that demands new donations [italics in the original]. . . . It doesn’t by any means imply the stability of a possession and even less the acquisitive dynamic of something earned, but loss, subtraction, transfer.36

Loss, subtraction, transfer – these are terms that have a very specific historical resonance for both Jews and Palestinians in the diaspora.37 While Israel’s current leaders are engaged in an extended land grab – and, without a trace of irony, some members of the diasporic Libyan Jewish community in Rome protest that they have never been compensated for the land they were forced by Gaddafi to leave behind – Jewish memory keeps alive a tradition of hospitality to the stranger. Whether or not that tradition can survive the violence of nationalism is yet to be determined.

    _____

John Champagne’s research is in the area of Comparative Cultural Studies, with a focus on the representation of gender and sexuality in modernist film and literature. He currently teaches at Penn State Erie, The Behrend College; as a Fulbright recipient, he spent the 2006–07 academic year teaching American Studies at the University of La Manouba in Tunisia. He is the author of four books, including Aesthetic Modernism and Masculinity in Fascist Italy (London and NY: Routledge, 2013).

    _____

    notes:
1. Volli, “Sionismo: una parola che non tutti capiscono,” Shalom, June 2013: 18. Unless otherwise indicated, all translations are mine.
    Back to the essay

2. Ibid.
    Back to the essay

    3. Rosellina Balbi, “Davide, discolpati!” La Repubblica 7.135 (July 6, 1982): 20.
    Back to the essay

    4. Oriana Fallaci, “Scuola di politica,” Il mio cuore è più stanco della mia voce (Milano: Rizzoli, 2013), 82. Fallaci’s remarks were made at a conference at the Harvard Institute of Politics entitled “Politics and War” on September 23, 1982 – one week after the massacre of Sabra and Shatila, to which she referred in her talk several times. Earlier that year, Fallaci had traveled to Beirut to interview then Colonel Ariel Sharon. The interview was published in the September 6, 1982 issue of L’Europeo.
    Back to the essay

5. “Chi siamo,” informazionecorretta.com, accessed May 19, 2014, http://www.informazionecorretta.com/main.php?sez=130.
    Back to the essay

6. Ugo Volli, “Il potenziale del genocidio 19/05/2014,” informazionecorretta.com, accessed May 19, 2014, http://www.informazionecorretta.com/main.php?mediaId=&sez=280&id=53466
    Back to the essay

    7. Ibid. Volli is drawing his conclusions from the results of a global test of antisemitism developed by the Anti-Defamation League. On this test, see “About the Survey,” ADL Global 100, accessed May 19, 2014, http://global100.adl.org/about
    Back to the essay

8. For the magazine’s website, see Shalom, Mensile Ebraico di Informazione e Cultura, accessed May 20, 2014, http://www.shalom.it
    Back to the essay

9. “Consiglio della Comunità Ebraica di Roma,” March 31, 1993, http://www.romaebraica.it/wp-content/uploads/2010/07/Regolamento-CER.pdf
    Back to the essay

    10. Art. 2.2, “Iscrizione alla Comunità,” Statuto dell’ Unione delle Comunità Ebraiche Italiane, accessed May 20, 2014, http://www.romaebraica.it/wp-content/uploads/2010/07/Statuto-UCEI1.pdf
    Back to the essay

11. “formalizzata con esplicita dichiarazione o deriva da atti concludenti” (“formalized by explicit declaration or deriving from conclusive acts”). See Art. 2.1, “Iscrizione.”
    Back to the essay

    12. Art.2.3, “Iscrizione.”
    Back to the essay

    13. On the structure of the Comunità, see “La C.E.R.” Comunità Ebraica di Roma, accessed May 20, 2014, http://www.romaebraica.it/cer-comunita-ebraica-roma/
    Back to the essay

    14. Art. 2.4, “Iscrizione.”
    Back to the essay

15. “Italian” refers to those Jews who have historically inhabited the Italian peninsula since antiquity; a synagogue has been discovered, for example, at Ostia Antica, thought to date from the reign of Claudius. On the synagogue, see Lee I. Levine, The Ancient Synagogue, The First Thousand Years (New Haven: Yale University Press, 2000).
    Back to the essay

16. Guri Schwarz, After Mussolini, Jewish Life and Jewish Memories in Post-Fascist Italy, trans. Giovanni Noor Mazhar (London: Vallentine Mitchell, 2012), 21. The association survived the postwar period, lasting until 1987; ibid., 22. Its name was changed to the present Unione delle Comunità Ebraiche Italiane in 1989.
    Back to the essay

    17. On Ginzburg, see Nadia Castronuovo, Natalia Ginzburg, Jewishness as Moral Identity (Leicester: Troubador, 2010).
    Back to the essay

18. Marianna Scherini, “L’imagine di Israele nella stampa quotidiana italiana: la guerra del Libano (settembre 1982),” in “Roma e Gerusalemme,” Israele nella vita politica e culturale italiana, ed. Marcella Simoni and Arturo Marzano (Genova: ECIG, 2010), 177-99.
    Back to the essay

19. Franco Belgrado, Edith Bruck, Ugo Caffaz, Miriam Cohen, Natalia Ginzburg, Primo Levi, David Meghnagi, and Luca Zevi, “Perché Israele si ritiri,” La Repubblica 7.123 (June 16, 1982): 10. The letter argued, “The destiny of Israeli democracy remains in fact inseparably tied to the prospect of peace with the Palestinian people and reciprocal recognition.” Also, contra Volli, the letter expresses the fear that the invasion will in fact give rise to “a new antisemitism.”
    Back to the essay

20. Daniela Di Castro, Treasures of the Jewish Museum of Rome (Rome: Araldo De Luca, 2010), 19.
    Back to the essay

21. B’nai B’rith Europe, “The Stefano Gay Tache Lodge in Rome,” accessed April 20, 2014.
    Back to the essay

22. Matteo Di Figlia, Israele e la Sinistra (Roma: Donzelli, 2012), 121.
    Back to the essay

    23. Scherini, “L’imagine,” 195.
    Back to the essay

24. Ibid., 195-96. Contra Scherini, both Levi and Fallaci were critical of Yasser Arafat, for example. See Primo Levi, “Chi ha coraggio a Gerusalemme?” Opere, 1171-72, reprinted from La Stampa, 24 June 1982, and Fallaci, “Scuola,” 78.
    Back to the essay

    25. Scherini, “L’imagine,” 196.
    Back to the essay

    26. Ibid., 197.
    Back to the essay

    27. Ibid.
    Back to the essay

    28. See the concluding chapter of Schwarz, After Mussolini.
    Back to the essay

29. Andrea Becherucci, “Vincere la guerra e perdere la pace. Israele e la guerra dei Sei Giorni in tre riviste della sinistra italiana: ‘Il Ponte,’ ‘L’Astrolabio,’ e ‘Rinascita,’” in “Roma e Gerusalemme,” Israele, 119.
    Back to the essay

    30. In the wake of years of international protest against the US war in Vietnam, Balbi disingenuously asked, in July of 1982, “Why is it only Israel that is judged by criteria not applied to other States? Why this visceral prejudice?” Balbi, “Davide,” 20. While the war in Vietnam might have been far from Balbi’s memory, this was not the case for some of her fellow Italians who analogized the invasion of Lebanon to Vietnam; see, for example, Fallaci, “Scuola,” 73, in which the writer compared the bombing of Lebanon (which occurred prior to the massacre at Sabra and Shatila) to Vietnam and Hué.
    Back to the essay

31. Associated with The World Union for Progressive Judaism, reform congregations currently exist in Florence and Milan (where there are two). For links to the websites of these communities, see “The World Union for Progressive Judaism, Worldwide Congregations, Europe,” accessed May 19, 2014, http://wupj.org/Congregations/Europe.asp. Lev Chadash of Milan, the first reform congregation, dates from 2001; see http://lnx.levchadash.info/index.php?option=com_content&task=view&id=10&Itemid=13. Volli was for a period of time its president. Rome maintains a Beth Hillel group for Jewish Pluralism.
    Back to the essay

    32. Michele Sarfatti, The Jews in Mussolini’s Italy: from Equality to Persecution (Madison: University of Wisconsin Press, 2006). On Jewish life during the fascist period, see also Alexander Stille, Benevolence and Betrayal, Five Italian Jewish Families Under Fascism (New York, NY: Picador, 2003).
    Back to the essay

33. Dan Segre, “Ebrei Italiani in Israele,” in Identità e Storia degli Ebrei, ed. David Bidussa, Enrica Collotti Pischel, and Raffaella Scardi (Milano: Franco Angeli, 2000): 190.
    Back to the essay

    34. Primo Levi, “Gli Ebrei Italiani,” in Opere, Vol 2., ed. Marco Belpoliti (Turin: Einaudi, 1997), 1293.
    Back to the essay

35. Roberto Esposito, Communitas, The Origin and Destiny of Community, trans. Timothy Campbell (Stanford: Stanford University Press, 2010). For an excellent, brief introduction to Esposito’s ideas, see Alexander D. Barder, review of Roberto Esposito, Communitas, Philosophy in Review 31, no. 1 (2011): 29-32.
    Back to the essay

    36. Ibid., 5.
    Back to the essay

37. On the 27th of June, 1982, Levi was interviewed by Alberto Stabile of La Repubblica. While resisting the positing of an analogy between Hitler’s “Final Solution” and “the quite violent and quite terrible things that the Israelis are doing today,” Levi nevertheless argued, “A recent Palestinian diaspora exists that has something in common with the diaspora of two thousand years ago.” Cited in Domenico Scarpa and Irene Soave, “A 25 anni della scomparsa, Le vere parole di Levi,” Il Sole 24 Ore, April 8, 2012, http://80.241.231.25/ucei/PDF/2012/2012-04-08/2012040821380709.pdf. The authors, however, get the date of the interview wrong, writing that it occurred on June 28.
    Back to the essay

  • Literature and Politics

    Literature and Politics

Henry Veggian establishes the Literature & Politics review section:

What intellectual traditions, political movements, writers and critics shape our understanding of the relationships between literature and politics in the United States? By what means do we identify such things, and to what ends? And how do these questions and others invite us to consider emergent configurations of critical thought? What possible futures might they suggest?

    The Literature & Politics section of The b2 Review solicits and invites original book reviews from interested contributors. We ask reviewers to evaluate critical works that consider how literary writers and writings engage forms of political thought, philosophy, history and action, as well as to evaluate figures, studies and traditions concerned with the dynamics between politics and the literary arts.

We ask for reviews of intermediate length, but word count is not as important as style; we ask that you write reviews for the specialist as well as for the interested reader. Reviews will appear on the boundary 2 website.

    Please contact boundary 2 for further inquiry.

    –Henry Veggian