Category: The b2o Review

The b2o Review is a non-peer reviewed publication, published and edited by the boundary 2 editorial collective and specific topic editors, featuring book reviews, interventions, videos, and collaborative projects.  

  • Devin Zane Shaw — Disagreement and Recognition between Rancière and Honneth

    by Devin Zane Shaw

    In an interview from 2012, Jacques Rancière states in response to a question about the role of dialogue in philosophy:

    I don’t believe in the virtue of dialogue in the form of: here’s a thinker, here’s another thinker, they’re going to debate amongst themselves and that’s going to produce something. My idea is that it’s always books that enter into dialogue and not people…. Dialogue is never, for me, what it appears to be, which is something like the lightning flash of an encounter, a live exchange.[i]

    We should, then, approach the recent Recognition or Disagreement: A Critical Encounter on the Politics of Freedom, Equality, and Identity (Columbia University Press, 2016), with a similarly circumspect attitude.

    The core of the book, edited by Katia Genel and Jean-Philippe Deranty, is the debate between Rancière and Axel Honneth that took place at the Institute for Social Research in Frankfurt in June 2009, but it also includes a supplementary text by each author and an essay from each editor. Given that the editors’ essays comprise, at eighty pages, forty-five percent of the text, one should be particularly attentive to the ways in which their interventions shape the reception of the debate that was the book’s occasion. Against the editors, I want to argue here that this debate demonstrates the incompatibility of Honneth’s and Rancière’s respective projects. Moreover, Rancière’s work cannot be reconceptualized in the terms of Honneth’s liberal iteration of critical theory without sacrificing precisely those parts of his thought that are the most inventive, interesting, and politically and intellectually subversive.

    My differences with Genel and Deranty can best be summarized through our respective interpretations of Rancière’s claim that, in his critique of Honneth, he has reconstructed a “‘’ [sic] conception of the theory of recognition” (95). In my view, Rancière critically appropriates the terms of “recognition” to show what it would require to become a theory of dissensus and disagreement. Deranty outlines what he takes to be Rancière’s concern with a theory of recognition that ranges from Althusser’s Lesson to Disagreement in order to demonstrate an “in-principle agreement” between Rancière and Honneth (37). First, he argues that many of the examples from Disagreement are based on historical research that Rancière conducted in the 1970s. Then Deranty adduces passages that mention recognition, such as Alain Faure and Rancière’s “Introduction” to La parole ouvrière (1976), where they refer to political struggle as “the desire to be recognized which communicates with the refusal to be despised” (quoted on 38). He also cites an early interpretation of Pierre-Simon Ballanche’s account of the plebeian revolt on the Aventine Hill (an episode which also plays a crucial role in the argument of Disagreement) where Rancière writes that the “rebellion was characterized by the fact that it recognized itself as a speaking subject and gave itself a name.”[ii] Rancière continues, though: “Roman patrician power refused to accept that the sounds uttered from the mouths of the plebeians were speech, and that the offspring of their unions should be given the name of a lineage.”[iii] This description has little to do with Honneth’s account of recognition, in which individuals recognize their freedom and the freedom of others as mediated by established social institutions. Yet Deranty concedes that “Rancière just disagrees with some of the key concepts used by Honneth,” which undermines the verbal parallels that he draws upon to signal their agreement (36, my emphasis).
Indeed, their principled dispute about their respective concepts undermines the very possibility of an “in-principle agreement.” Therefore, to evaluate the relationship between Rancière’s egalitarian politics and Honneth’s theory of recognition we cannot rely on verbal parallels; instead, we must address how the concepts of recognition and disagreement play out in relation to a theory of the political subject, the relation between politics and the political, and problems concerning what Rancière calls “the police” and social normativity.

    To address these questions, I will begin with the final essay included in Recognition or Disagreement, Honneth’s “Of the Poverty of Our Liberty: The Greatness and Limits of Hegel’s Doctrine of Ethical Life.” Earlier in the book, Honneth claims that “all kinds of political orders have to give a certain description or legitimation for who is included in the political community,” and, indeed, political philosophy often aims to supply the legitimation for a given society’s norms that decide how and whether individuals and their practices are included in or excluded from the political order (115). Hegel, on Honneth’s account, demonstrates the logical and practical coherence of the social objectivity of the various types of individual freedom, that is, how freedom relates, through recognition, to politics, work, and love.

    In the book’s concluding essay, Honneth examines, first, how Hegel reconciles two common, subjective concepts of individual freedom within his account of objective freedom as it is realized in ethical life. Both subjective concepts are abstract sides of modern political freedom. For Hegel, the transition to modernity entails conceptualizing social institutions as “making possible the realization of freedom” (160). In other words, on Hegel’s account, individual freedoms are mediated through institutions—and institutions are mediated and produced through the actualization or realization of individual freedoms. Thus, when Hegel reconciles the two subjective concepts of freedom, which approximate what Isaiah Berlin calls negative and positive freedom, he demonstrates that both fail to incorporate the objectivity of freedom as it is embodied in concrete social institutions. According to the “negative” concept of freedom, an individual is free insofar as they are unhindered by the actions of others. While Hegel incorporates this incomplete concept of freedom within his system as “abstract right,” which ensures state protections of individual life, property, and freedom of contract, he faults negative freedom for lacking a positive determination of what the subject can do, socially, with freedom. According to the “positive” concept of freedom, which Hegel largely derives from Kant, the basis of morality is autonomy, the self-legislating and self-reflexive activity of the subject. While this concept of freedom gives a positive foundation to what morality is, it nonetheless remains subjective, lacking a concrete relationship to social objectivity.

    These negative and positive concepts of freedom are, therefore, in Hegel’s terms, “merely” subjective, while Hegel aims to demonstrate that individual freedom is objective, that is, reflected and recognized within objective social institutions. This concept of objective freedom is not limited merely to how we understand social institutions. To say that freedom is objective delimits an important intersubjective feature of individual freedom. As Honneth points out, Hegel argues that we cannot rely on Kantian models of autonomy in friendship or love, since the self-limitation of my freedom in the experience of friendship or love is not a self-limitation; it is “precisely that the other person is a condition of realizing my own, self-chosen ends” (164). The realization of a given individual’s freedom entails concrete social situations that implicate the freedom of others, and it is because social institutions mediate our relations with others that they have objective reality. Hegel—and by extension, Honneth—maintains that institutions receive normative justification insofar as they reflect and embody the practices of individuals’ freedoms, and that social institutions, in turn, engender the emergence and expansion of individual freedoms.

    Now, one can see why Honneth follows Hegel through the discussion of objective freedom in the doctrine of ethical life: what both the negative and positive subjective concepts of freedom lack is recognition. In our institutions, Honneth suggests, we should be able to recognize not only our own intentions but also the intentions of other subjects. In addition, Hegel identifies three ethical spheres in which each individual’s freedoms are realized in relation to others’: personal relationships, the market economy, and politics. For these reasons, Honneth argues that the “general structure” of Hegel’s doctrine of ethical life, despite some shortcomings, “remains sound even today,” and that this doctrine provides “us with a normative vocabulary that we can use to assess the respective value of the various freedoms we practice” (169; 167). Nonetheless, Honneth also faults Hegel for treating “as sacrosanct” three historically specific institutions as the outcome of the self-realization of objective spirit: the family—“guided by the patriarchal prejudices of his own day”—the capitalist market economy, and constitutional monarchy (171). While Hegel did not explicitly address the possibility that these institutions could be transformed to “make them more amenable to the basic demand for relations of reciprocity among equals,” Honneth contends that Hegel’s account of morality hints toward how political practice can revise social norms and reorganize social institutions to make them more democratic (172). According to Honneth’s revision of Hegel, the inclusion of liberal rights and the possibility for “moral self-positioning” allows for individuals to engage in “morally articulated protest” (174). Thus Honneth allows for a continued moral progress within societies and social institutions to a degree that was not envisioned by Hegel.

    *

    Despite his Hegelian framework, and despite his debts to the Frankfurt School, Honneth’s project shares some of the central concerns of mainstream Anglo-American political philosophy today: the emphasis on processes of justification and establishing conditions of justice in order to evaluate institutional and normative frameworks. By contrast, Rancière’s political thought shares neither the methods nor the goals of mainstream political philosophy. Todd May has already explored in detail the differences between Rancière and mainstream political philosophy (including Rawls, Nozick, Amartya Sen, and Iris Marion Young). In May’s account, these political philosophers rely, whether they are proponents or critics of distributive theories of justice, on a concept of “passive equality”: “the creation, preservation, or protection of equality by governmental institutions.”[iv] Rancière, though, makes the stronger polemical claim that political philosophy embeds itself in, and offers justification for, regimes of inequality that he calls “the police” or “policing.” One of the most striking features of Rancière’s work is his claim that what we typically call politics, even in its most democratic forms (voting, deliberation, governance, and popular legitimation), is policing. In Disagreement, Rancière defines the police as:

    first an order of bodies that defines the allocation of ways of doing, ways of being, and ways of saying, and sees that those bodies are assigned by name to a particular place and task; it is an order of the visible and the sayable that sees that a particular activity is visible and that another is not, that this speech is understood as discourse and another as noise.[v]

    Since this definition of the police sounds very close to the way that Rancière often glosses his concept of “the distribution of the sensible,”[vi] we should specify that policing produces and reproduces relations of inequality, the stratification of roles within a given distribution of the sensible that partitions individuals and groups according to inclusion and exclusion, such as those whose task it is to rule and those whose task it is to obey. Moreover, on Rancière’s account, politics—in May’s terms, “active equality”—is a dynamic of collective engagement and revolt that aims to subvert and resist the stratification and coercion of policing and social institutions. Given that Honneth’s account of recognition emphasizes how social institutions mediate and engender individual freedoms, it then follows that, in Rancière’s terms, Honneth’s theory of recognition would be not so much an account of politics as an account—though a progressive one—of policing.

    And yet, in “Critical Questions on the Theory of Recognition,” his critique of Honneth (and Chapter Three of the book), Rancière does not use the terms “police” or “policing.” Instead, he begins with the conditional hypothesis that his differences with Honneth are best articulated by treating their respective approaches as competing theories of recognition. At the outset, however, he signals his critical intent by suggesting that “the term ‘recognition’ might also emphasize a relationship between already existing entities,” these entities being individuals and established social institutions (83). When, then, Rancière concludes that he’s sketched, through his critique of Honneth, his own theory of recognition, he’s appropriated the language of critical theory to articulate a politics of dissensus and disagreement.

    Rancière pursues this hypothesis—that he and Honneth are outlining competing theories of recognition—in order to locate their central points of disagreement. In Disagreement, Rancière defines disagreement (la mésentente) as a specific kind of political challenge to a given order of policing, “a determined kind of speech situation in which one of the interlocutors at once understands [entend] and does not understand [entend] what the other is saying.”[vii] In French, the term la mésentente plays on different connotations of the verb entendre, between “to hear” and “to understand.” On Rancière’s account, the politics of disagreement emerges when the marginalized or oppressed (what he calls “the part with no part”) within a given social order challenge the ways in which society is policed, and often these challenges are phrased in terms that have readily accepted meanings within society. However, politically contentious terms, such as equality, rights, or justice, are given inventive new meanings that challenge the normative frameworks of a given regime of policing; the part with no part contesting injustice and the police can “hear” the same demands but “understand” entirely different things. Many political theorists lament this ambiguity and aim to define it away. Rancière, by contrast, argues that the ambiguity of our contentious terms and ideals makes dissensus possible. That is, this ambiguity makes it possible to identify how these politically contentious terms circulate between policing and politics, how they come to articulate and combat inequality and coercion. For example: justice, for some, means due process and equal consideration before the law, while justice for movements such as Black Lives Matter opens onto both a broad indictment of how so-called due process legitimates injustice against African-Americans who are victims of police violence, and a broader vision of transformative social justice.

    In “Critical Questions on the Theory of Recognition,” Rancière uses disagreement in a broader, dialogical sense rather than its specific, political sense. He argues that dialogue—to be truly dialogical—must be an “act of communication [which] is already an act of translation, located on a terrain that we don’t master” (84). Dialogue always involves translation and distortion, but also invention; in terms of philosophy, it means that both interlocutors must think outside of their usual terminology: distortion remains “at the heart of any mutual dialogue, at the heart of the form of universality on which dialogue relies” (84). But Rancière also suggests that dialogue, in its more specific, political sense, requires acknowledging the “asymmetry in positions” between interlocutors. This claim summarizes his differences with Habermas, which he had previously outlined in Disagreement: acknowledging how asymmetry and power distort the ideals of political dialogue entails, in Rancière’s account, a stringent form of universalism that demands that philosophers confront not just institutional barriers to democratic deliberation, but also how the processes of deliberation function to exclude certain forms of political speech and action. Thus Rancière’s critical question: to what degree does Honneth’s theory of recognition rely on the presupposition that the demands of political subjects have always already been mediated by social institutions?

    To confront this question, Rancière proposes three working definitions of recognition. Two reflect common usage: on the one hand, recognition means the concurrence of a perception with prior knowledge, as when we recognize a friend, location, or information; on the other hand, recognition in the moral sense designates how we recognize other individuals as autonomous beings like ourselves. In both cases, Rancière notes, “re-cognition” functions as an act of confirmation. He then hypothesizes that recognition could also be conceptualized in the terms of what he calls a distribution of the sensible. Recognition, then, “focuses on the configuration of the field in which things, persons, situations, and arguments can be identified” (85). In this sense, recognition comes prior to any act of confirmation—and the critique of recognition entails disagreement over the conditions in which persons, things, or situations are understood as such.

    We could ask, for instance, how it is that a given regime of policing frames some enunciations as political demands against injustice and others as merely subjective complaints or even noise? And we could use an analysis of this situation to attack the broader norms that legitimate this distribution of speech and noise. While Rancière acknowledges that Honneth’s account of recognition “echoes” his own polemical account, he raises a crucial question: to what degree does Honneth’s account rely on the two connotations of the common usage, presuming a stable distribution of the sensible or normative framework that relies on an “identitarian conception of the subject” that conflicts with a “conception of social relations as mutual,” as dynamically and socially constructed (85)?

    First, Rancière contends that Honneth embraces an “anthropological-psychological” concept of the subject that is heavily indebted to a Hegelian “juridical definition of the person” (87). Thus Honneth’s account of the subject’s struggle for recognition emphasizes the affirmation of self-identity and self-integrity within the intersubjective structure of recognition. In other words, it’s the same integral individual subject who seeks recognition within a multiplicity of situations related to love, work, or politics. Rancière then argues that this juridical model of the integral identity of the subject conflicts with its claim to articulate intersubjective social agency—a point encapsulated in Honneth’s summary of love and recognition in the book: “in friendship and love my experience is precisely that the other person is a condition of my realizing my own, self-chosen ends” (164). To say that love involves two individuals realizing their respective ends and interests through another is overly juridical. To Honneth, Rancière counterposes love as it is found in À la recherche du temps perdu, where Proust describes love as a dynamic and aesthetic construction of an other. Rancière writes:

    What appears at the beginning is the confused apparition of a multiplicity, an impersonal patch on a beach. Slowly the patch appears as a group of young girls, but is still a kind of impersonal patch. There are many metamorphoses in that patch, in the multiplicity of young girls, through to the moment when the narrative personifies this impersonal multiplicity, gives it the face of one person, the object of love, Albertine. (88)

    Rancière offers this counternarrative to show how our theoretical frameworks delimit the possibilities of social agency that we are able to recognize—a criticism that Honneth subsequently accepts.

    Rancière’s attention to this point perhaps explains how his terminology can be alternately powerful and abstract. When he opposes the politics of equality to policing, the opposition readily calls to mind clashes between protestors and cops, though politics cannot be reduced to these terms. When he defines those subjects who confront the established order as the part with no part, however, the definition is far more abstract than saying the marginalized and oppressed. But Rancière relies on this level of abstraction in order to avoid stipulating conditions of political agency that would delimit who this part is, since any such stipulation could exclude groups who have yet to emerge and whom we cannot foresee.

    In general, for Rancière, political subjects are neither self-identical nor self-integral. Instead, political subjects emerge through a dynamic of what he calls disidentification, the rejection of the roles, places, and tasks assigned to bodies within a given regime of policing. We could interpret Proust’s description of love, then, as a metaphor for the dynamic of political subjectivation: political subjects emerge as a multiplicity, at first an impersonal patch in the social field, until this multiplicity takes shape through the invention of a name—for instance, #blacklivesmatter or #NoDAPL—for a collective disruption of or rebellion against the police order. Given that all regimes of policing are instantiations of social inequality and coercion, politics is, for Rancière, by definition egalitarian. It is equality, he argues, that leads to a much more exacting concept of universality than an account of politics that neglects the asymmetry between the political subjects who exist by virtue of contesting the social order and the established order of policing. Politics enacts the affirmation of “an equal capacity to discuss common affairs”; in other words, politics enacts the intellectual and political equality of anybody and everybody (93).

    The task of political thought is to ascertain how politics involves a “polemical configuration of the universal” (94). The Black Lives Matter movement began with a call for justice for Mike Brown in Ferguson, but, according to Keeanga-Yamahtta Taylor, its next stage involves both “engaging with the social forces that have the capacity to shut down sectors of work or production until our demands to stop police terrorism are met” and movement building through solidarity, which addresses how, while African-Americans “suffer most from the blunt force trauma of the American criminal justice system,” the broader normative framework of “law-and-order politics” functions to oppress the poor in general.[viii] From a standpoint informed by Rancière, the goal of political thought would be to identify the movements and practices that drive “the process of spreading the power of equality” in the here and now, to identify how specific movements involve a polemical force of universality to subvert and combat the normative frameworks of a given police order (94). Far from endorsing a theory of recognition, Rancière has redefined recognition as a politics of dissensus and disagreement.

    *

    Thus we have good reason to doubt Deranty’s claim of an in-principle agreement between Rancière and Honneth. Indeed, the editors and I reach very different conclusions regarding the significance of this debate because they accept Honneth’s theoretical framework to interpret it, while I refuse to subsume Rancière’s concepts under Honneth’s. The point here, though, is not establishing who has read Rancière or Honneth correctly, but to examine how these interpretations delimit what each thinker believes is politically possible and feasible.

    Our first difference concerns the supposed common ground shared by Rancière and Honneth. Though Rancière explicitly chooses to oppose “politics” (la politique), rather than “the political” (le politique), to “the police,” Honneth and the editors equivocate between “politics” and “the political.” However, the terms, especially in French philosophy, are distinct—which means Rancière has made a deliberate conceptual choice.[ix] Politics, on his account, designates a dynamic activity, while “the political” carries the connotation of an original, fundamental political sphere upon which policing has supervened. For Honneth, then, when Rancière discusses equality, he’s describing either an “original definition of the political community” (115) or a political anthropology in which human beings “are constituted by a wish or a desire to be equal to all others,” and this “egalitarian desire…brings about the exceptional moment of politics” (99). In their “Critical Discussion” included in the book, Rancière rightly rebuts both of these characterizations. He holds that if politics takes place, it does so through an egalitarian praxis opposed to the police. To treat Rancière’s politics as a political anthropology, imputing particular desires or motives to political subjects, implies that the debate is over whether human beings are motivated by a desire for recognition or by a desire for equality, as if it could be resolved by a political anthropology of desire.

    If this is not enough reason to reject Honneth’s way of framing the debate, he also characterizes recognition and disagreement as two complementary forms of struggle with different scopes—but this categorization carries with it an implicit normative claim that recognition is more practical. He argues that Rancière brusquely reduces “the political,” considered as “a stratified normative order of principles of recognition,” to policing (103). On Honneth’s view, Rancière interprets this stratified normative order too rigidly, since these norms are open to conflicts over their meaning, that is, subject to reinterpretation and revision. For Honneth, the revisability of the normative order allows us to conceive of two types of political intervention: an internal and an external struggle for recognition. In Honneth’s terms, Rancière focuses exclusively on the external struggle for recognition, which, while it combats the “political order as such,” ignores the “reformist” ambitions of the internal struggle, which aims to reinterpret existing normative principles to make social institutions and their normative frameworks more democratic and inclusive.

    But Honneth’s distinction between the internal and external struggles for recognition is not merely descriptive, but also normative: given, he claims, the difficulties in formulating injustice in revolutionary terms, it’s more important in day-to-day politics to “deal with these small projects of redefinition or of reappropriation of the existing modes of political legitimation” (106). Unlike Honneth, Rancière does not prescribe the scope of political struggle within a given situation, since such a prescription functions to legitimate or delegitimize choices we make about what is to be done. These choices cannot be evaluated outside of the context of political struggle itself. But Honneth’s normative preference is part of his philosophical framework: if the freedom of individuals is engendered and mediated by social institutions and norms, and if self-integrity is one of the primary ends of the theory of recognition, then individuals should aim to reform and reinterpret these institutions and norms incrementally.

    From Rancière’s perspective, even if we grant that political freedom is sometimes engendered by existing social institutions, this does not entail that all parts of society should recognize these institutions as engendering their freedom. Those who are marginalized and oppressed could just as easily recognize how a given institution has functioned to exclude, marginalize, oppress, or immiserate them. The goal of politics for these political subjects need not, and should not, be—nor should we prescribe it to be—reform of, or formal recognition within, the institutions that have historically oppressed them. From Rancière’s standpoint, it is right for the part with no part to combat and transform the very normative principles that legitimate and reinforce these institutions of inequality, and to prescribe reform rather than radical normative transvaluation serves to delegitimize the possibility of formulating and enacting broader goals of political struggle.

    Thus while Recognition or Disagreement presents the debate between Rancière and Honneth, it speaks to broader issues about the scope and aims of contemporary political thought. The contrast between Honneth and Rancière ably demonstrates Rancière’s stubborn refusal to engage in the processes of justification valorized by mainstream political theory—indeed, it serves as a stark reminder of how engaging in these problems often (and, in Rancière’s view, always) entails accepting profound social inequalities. However, this book is also important because it shows that if we mainstream Rancière’s work, as Genel and Deranty attempt to do, we lose those parts of his work that are most subversive and inventive—and we are left with only Honneth.

    Devin Zane Shaw teaches philosophy at Carleton University. He is the author of Egalitarian Moments: From Descartes to Rancière (Bloomsbury, 2016) and Freedom and Nature in Schelling’s Philosophy of Art (Bloomsbury, 2010).

    Notes

    [i] Jacques Rancière, The Method of Equality: Interviews with Laurent Jeanpierre and Dork Zabunyan, transl. Julie Rose. Malden: Polity, 2016, p. 183.

    [ii] Quoted on 38, but the reference is incomplete. See Rancière, Staging the People: The Proletarian and His Double, transl. David Fernbach. London: Verso, 2011, p. 37.

    [iii] Rancière, Staging the People, 37.

    [iv] Todd May, The Political Thought of Jacques Rancière: Creating Equality. University Park: Pennsylvania State University Press, 2008, p. 3.

    [v] Rancière, Disagreement: Politics and Philosophy, transl. Julie Rose. Minneapolis: University of Minnesota Press, 1999, p. 29.

    [vi] As Rancière defines it in Recognition or Disagreement, a distribution of the sensible is “a relation between occupations and equipments, between being in a specific space and time, performing specific activities, and being endowed with capacities of seeing, saying, and doing that ‘fit’ those activities. A distribution of the sensible is a set of relations between sense and sense, that is, between a form of sensory experience and an interpretation that makes sense of it. It is a matrix that defines a whole organization of the visible, the sayable, and the thinkable” (136).

    [vii] Disagreement, p. x.

    [viii] Keeanga-Yamahtta Taylor, From #BlackLivesMatter to Black Liberation. Chicago: Haymarket Books, 2016, pp. 217, 211.

    [ix] See Samuel A. Chambers, The Lessons of Rancière. Oxford: Oxford University Press, 2013, pp. 50–57.

  • Quinn DuPont – Ubiquitous Computing, Intermittent Critique

    a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)

    by Quinn DuPont

    ~

    It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the gambit of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.

    There are of course many ways to approach such a large and important topic: from the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal either entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six out of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength and one of its major weaknesses.

    By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, as a kind of Peircean index, and one such way to think about systems of representation. Another example of theirs, Natalie Jeremijenko’s One Trees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index, since the trees are from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristic indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.

    MoMA PS1 | James Turrell, Meeting 2016. Photos by Pablo Enriquez

    A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways that film, cinema, and theater are simply not like modern digital technologies. Such an epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating from screen studies—and a proximate effect of Lev Manovich’s agenda-setting The Language of New Media (2001), which relished the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.

    Because of this orientation, some of the chapters fail to critically engage with technologies, events, and practices largely affecting lived society. A very good artwork may go a long way toward exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to avoiding what amounts to belated criticism.

    This specificity about social issues might come in the form of engagement with normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do, which is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, unsexy aspects of wearable technologies (accessibility) now offer potential assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented surveillance and monetization opportunities. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity, they are hard to see, and therefore, tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.

    A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things are empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important object of study, but not as the pile-on Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.

    Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.

    Before turning to a sample of the truly excellent chapters in this volume, I must pause to make a comment about the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, using some back-of-the-napkin calculations, the 406 dense pages amount to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And, while such a large book might seem like good value, the pragmatics of putting an estimated 330,000 words into a single volume requires considerable care in typesetting and layout, care that is unfortunately absent here. At about 90 characters per line, and 46 lines per page—all set in a single column—the tiny text set on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text there are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.

    Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to attempt to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem any other weaknesses.

    Elaborating on a theme from her 2011 book Programmed Visions (MIT Press), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from “Ubiquitarians,” a Lutheran sect that believed Christ was present ‘everywhere at once’ and therefore invisible.) According to Chun’s reading of Weiser, to get to a state of ubiquitous computing, machines must lose their individualized identity or importance. Therefore, unindividuated computers had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential aspect of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment to cognitive models of computation), storage was a well-understood word, which etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). Thus, if we wanted to rehabilitate Weiser’s original image of being able to “think and live,” we would need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).

    Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, through the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Waitangi Treaty, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore, the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content to simply offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer-to-peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.

    Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release) and the now-maligned Google Glass offer unique value propositions for those living with a spectrum of impairments and “complex disability effects” (274). For people who rely on these devices for day-to-day assistance and wayfinding, these devices are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.

    John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” when the Dow fell nearly a thousand points, lost a trillion dollars in value, and rebounded within five minutes, are essentially uncontrollable and unpredictable. This narrative, Johnston points out, has been detailed before, but he twists it, arguing that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events, which just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into critical technology that is both ubiquitous and complex.

    _____

    Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.


  • Andrew Martino – Exhuming the Text: Alice Kaplan’s “Looking for the Stranger: Albert Camus and the Life of a Literary Classic”

    Andrew Martino – Exhuming the Text: Alice Kaplan’s “Looking for the Stranger: Albert Camus and the Life of a Literary Classic”

    Alice Kaplan’s Looking for the Stranger: Albert Camus and the Life of a Literary Classic

    Reviewed by Andrew Martino

    Albert Camus never considered himself an existentialist. In fact, Camus never exclusively believed in any school of thought. Camus was the consummate outsider, the one who stood apart from those who subscribed to views that forced their subscribers into a narrow ideology, especially when that ideology mixed with violence, something Camus steadfastly resisted. If we had to place Camus into any category, it would be that of the humanist caught in the absurd. Camus believed in life over death (without believing in an afterlife), yet this belief did not keep him from contemplating the question of suicide, the only serious philosophical problem confronting us, as he writes in The Myth of Sisyphus. Camus’ humble beginnings in extreme poverty, in an illiterate household in his native Algeria, testify to the power of the human spirit in the face of an indifferent world. When he was awarded the Nobel Prize for Literature in 1957, he expressed reservations and claimed that the prize should have gone to André Malraux, an early influence on his writing. Camus also realized that the Nobel would bring a certain celebrity that would complicate his life, perhaps even sabotage his art. Add to this his “silence” on the Algerian problem and his very public and acrimonious break with Sartre, and Camus becomes a figure trapped in a world where he is increasingly unable to control his own image. Camus is a problematic figure who is claimed by both the Right and the Left, leaving the man and his writing caught in a political vortex. Focusing on the postcolonial aspect of The Stranger, Edward W. Said writes that Camus “is a moral man in an immoral situation.”[i] When Camus died at the age of 46 in a car accident in 1960, he left the world with the image of the charismatic young man, Bogart-like in his coolness, and still with the promise of great things to come. But a saint he was not.
His numerous affairs and constant womanizing, his reluctance to act or speak out against French imperialism in Algeria, his disillusionment with and expulsion from the Communist Party, render him more human than academics might be comfortable with. Camus’ life was full of contradictions, full of silences. Yet, it was precisely from these contradictions and silences that Camus produced one of the most important and widely read books of the twentieth century.

     Looking back over the seven decades since the publication of The Stranger, and notwithstanding Camus’ reluctance to situate himself (in the Sartrean sense of the term) in the bubble of existentialism, a bubble in which The Stranger and his relationship with Sartre placed him, the novel blazed a path that opened up fields where the absurd might be articulated, contemplated, and confronted from the inside (the modernist bent) rather than from above and beyond, as the canonical novels of the nineteenth century may have done. In her essay “French Existentialism,” Hannah Arendt briefly examines Sartre’s and Camus’ influence on the “new” movement where novels carry the weight of philosophy. Throughout that essay she also comments on Camus’ reluctance to be labeled an existentialist. “Camus has probably protested against being called an Existentialist because for him the absurdity does not lie in man as such or in the world as such but only in their being thrown together.”[ii] Here we have what is perhaps the most concise and articulate formulation of absurdist philosophy to date. Camus’ definition of absurdity, painstakingly mapped out in Caligula, The Stranger, and The Myth of Sisyphus, is not quite existentialism, but does contain existentialist DNA, especially that of Kierkegaard and Dostoevsky, two of Camus’ patron saints. As Camus remarks in The Myth of Sisyphus: “I can therefore say that the Absurd is not in man (if such a metaphor could have a meaning) nor in the world, but in their presence together.”[iii] Camus’ definition of the absurd is also the epistemological curve in the road separating him from Sartre’s thinking. If Sartre’s philosophy can be distilled into his phrase “Hell is other people,” then Camus’ articulation of the absurd, as we’ve seen above, resides not in relationships among people but in the relationship of humans with their world.

    Together, Sartre and Camus blazed a path where philosophy and art, in this case literature, met, thereby ushering in a new form of the novel, one that would examine existence from a philosophical perspective while providing a literary form in which to mold those perspectives. What emerges from this is a hybrid. According to Randall Collins, “What was identified was a tradition of literary-philosophical hybrids. Sartre and Camus were key formulators of the canon, and themselves archetypes of the career overlap between academic networks and the writers’ market. The phenomenon of existentialism in the 1940s and 1950s added another layer to this overlap.”[iv] But this hybridization was more than a heady cerebral new movement in fiction; this hybrid constituted a new way of thinking about the world, a world that emerged primarily from a particular network of intellectuals at a particular time in Paris. Sartre and Camus are on the crest of this wave of existentialism and their thinking would go on to change the world.

    Alice Kaplan’s extraordinary new book Looking for the Stranger: Albert Camus and the Life of a Literary Classic, is a careful and meticulously researched examination of Camus’ 1942 novel. Kaplan is one of the leading scholars of twentieth-century French culture and history. She is currently the John M. Musser Professor of French at Yale University, where she also received her Ph.D. in French in 1981. She has published seven books, including French Lessons: A Memoir (1993), The Collaborator: The Trial and Execution of Robert Brasillach (2000), and Dreaming in French: The Paris Years of Jacqueline Bouvier Kennedy, Susan Sontag, and Angela Davis (2012). In 2013 Kaplan edited and provided the introduction to The Algerian Chronicles, a collection of articles and essays Camus wrote from 1939-1958. Kaplan’s edition marks the first time these writings have appeared in English, so she is no stranger to Camus and his place in twentieth-century French culture.

    Early on, Kaplan claims that Looking for the Stranger is actually a biography of Camus’ best-known work, and one of the most famous and widely read texts of the twentieth century. However, this does not mean that Kaplan foregoes a glimpse into Camus’ life, thus resurrecting the Barthesian “death of the author” debate. Instead, Kaplan goes looking for The Stranger in the author rather than the author in The Stranger; the difference is subtly stunning. In other words, her investigation is more preoccupied with the creative process and its cultural and social context than it is with getting to the author as a god-like figure. Camus always claimed that The Stranger was the second in a three-part series exploring the absurd from three different perspectives: a novel (The Stranger), a dramatization (Caligula), and a philosophical work (The Myth of Sisyphus). But The Stranger is hardly a book that needs rescuing from obscurity, nor does Kaplan claim that it does. To date the novel has sold over ten million copies and is still read in over forty languages. It is still on high school and college syllabi, thus making it required reading for young men and women. In fact, a student’s first encounter with existentialism and the absurd is likely to come from a reading of The Stranger. Rather than a rescue, Kaplan offers us a more comprehensive look into the text, running down every lead, exploring every avenue that might expand our understanding of what makes The Stranger the text that it is.

    Kaplan begins by acknowledging the spectacular success of The Stranger, which has made it one of the most popular and important texts of the twentieth century. She quickly glosses over the critical reaction to The Stranger by pointing out that readings of the novel map some of the most important theoretical lenses that have influenced twentieth-century thought. “In fact, you can construct a pretty accurate history of twentieth-century literary criticism by following the successive waves of analysis of The Stranger: existentialism, new criticism, deconstruction, feminism, postcolonial studies” (2). The Stranger, she claims, has influenced the thinking of a diverse population that spans generations. Indeed, the novel’s remarkable staying power, its being perhaps even more relevant now than when it was published, is a feat that its author and its critics at the time could not have foreseen. I am not sure that students continue to read The Stranger with the commitment that they once did, but it is undeniable that the novel still matters, that it still provokes us into thinking, especially in a time when fundamentalism and terrorism are on the rise, and Europe and the United States are flirting with a new form of fascism in the guise of a renewed interest in rigid nationalism. But Kaplan is not necessarily interested in the public and academic reception of The Stranger. Instead, she claims that the novel’s readers and commentators have overlooked something since its publication: a biography of the novel. “Yet something essential is lacking in our understanding of the author and the book. By concentrating on themes and theories—esthetic, moral, political—critics have taken the very existence of The Stranger for granted” (2-3).
    She takes the unprecedented, and academically unpopular, path of looking into the life of the author and the circumstances that allowed him, at a particular place and time, to write one of the most powerful works of world literature. However, it is important to point out that Kaplan sets out to write a biography of the novel, and not the author. In fact, Camus’ life becomes a part of the puzzle that is The Stranger.

    Kaplan is not the first to comment on the unlikely success of The Stranger and its problematic birth. She is, however, the first to devote an entire book to an investigation, almost documentary-like in its approach, of the novel from conception to publication and beyond. And she accomplishes this brilliantly. Told in twenty-six short chapters, bookended by a prologue and an epilogue, Kaplan leads us into the depths of the novel in a highly engaging and thought-provoking fashion. In fact, the structure of her book presents its readers with the “life” of the novel, a life that has continued on long after the death of its creator. Drawing from a reservoir of sources, including Camus’ notebooks and her own trips to Algeria, Looking for the Stranger is a scholarly adventure story. As Kaplan claims in her acknowledgements: “I looked for The Stranger in libraries, in archives, in neighborhoods on three continents” (219). Of course, the idea of The Stranger was with her all of the time, but what makes Kaplan’s book so provocative is precisely the lengths she goes to in search of the novel. Kaplan explores The Stranger in three parts: before its publication, during its publication, and after its publication.

    In the first chapter Kaplan gives us the image of a young man in front of a bonfire burning various papers that link him to a past, a past that could be dangerous to him and those who know him. But as Kaplan tells it, the young Camus could not bring himself to burn all of his letters and writings. What he saved would act as a cache of material, both physical and remembered, that he would later extract from and rework into a slim, simply told tale of a man who fails to cry at his mother’s funeral and, by a series of circumstances, ends up shooting an unnamed Arab on a beach, only to be arrested, tried, convicted, and sentenced to death. Yet, the reader is never quite sure if the protagonist is convicted and sentenced to death because of the murder or his refusal to conform to the rules of a society that demands that one cry at one’s mother’s funeral. The image of the bonfire given to us by Kaplan is a powerful one. As we travel with her deeper into her investigation, we learn that the bonfire was a kind of rite Camus needed to perform in order to purge his mind and soul so that he could go on to write what he felt needed to be written—unimpeded by ghosts, but still attentive to their silences, which spoke to and through him.

    Throughout the spring of 1940, six years after the bonfire, Camus worked furiously on The Stranger, almost in total isolation, holed up in his miserable hotel room in Montmartre, interrupted only to work for five hours a day at Paris-Soir. The twenty-six-year-old was as cut off from the world as he had ever been. Alone in a foreign city, with German bombs exploding all over France, Camus fought his loneliness and misery by throwing himself into his writing. Because he was not yet divorced from his first wife, Simone Hié, his fiancée Francine Faure refused to accompany him to Paris. All he brought with him were the first chapter of The Stranger and a few of his press clippings. Kaplan: “His sense of separation from everyone he loved put him in a state of mind that was both painful and enabling” (71). Like Camus’ biographer Olivier Todd, Kaplan highlights the importance of Camus’ isolation when he first arrives in Paris. Camus believed that the failure of A Happy Death, his abandoned first novel, was due to his inability to write without interruption. Camus’ isolation in Paris enabled him, out of necessity, to devote all of his attention to The Stranger. Kaplan’s research offers us a marvelous glimpse into the creative process Camus used, or perhaps more accurately, was host to, during his writing of the novel. Kaplan claims that Camus wrote The Stranger almost line for line, as if he were dictating a story he was seeing play out before his eyes. Where he struggled with the writing of A Happy Death, The Stranger seems to have emerged almost fully formed, complete.

    The seeming ease of the writing, however, does not mean that The Stranger was without its problems. In fact, the birth of The Stranger was long and fraught with difficulties both internal and external. Until his arrival in Paris, Camus struggled to get into the narrative, to create a new story while also using material from A Happy Death. Interestingly, most reviewers of Kaplan’s book, in particular Robert Zaretsky, himself an accomplished Camus scholar, and John Williams,[v] have devoted a majority of their reviews to the shortage of paper in France as the novel was set to go to press. “To say that the very existence of The Stranger was threatened by the material conditions of the war is no exaggeration, since paper supplies were becoming more and more precious. It looked at one point as if Camus would have to supply his own paper stock!” (136). Camus was in Oran with his family at the time, and was happy to help Gallimard with locating paper. The novel came very close to not being published, but paper stock was found at the last minute and Camus was not obliged to supply his own.

    Once the novel was published it was met with immediate success. But perhaps its success was not so unusual after all. From the beginning Camus wanted the French publishing world, located in Paris, to represent him. In the chapter “A Jealous Teacher and a Generous Comrade,” Kaplan tells the story of Camus’ almost frantic correspondence with Jean Grenier and Pascal Pia, the teacher and the comrade, respectively, and their influence on The Stranger in its early stages. More importantly, if Camus were to move from a provincial author to a wider audience, one that would include the whole of Europe and possibly America, he would have to seek publication outside of Algeria. As Kaplan notes: “Yet Paris was still the center of book publishing in France, and if Camus wanted to publish outside Algeria, he’d eventually have to find a way to get his manuscript to the capital” (107). This, it seems to me, provides the necessary evidence that Camus was thinking bigger than his native land. He desired a world stage, a stage that would allow his work to be read by the widest possible public, and Gallimard was the publisher that could provide him with that opportunity. In his book The Existentialist Moment: The Rise of Sartre as a Public Intellectual, Patrick Baert illustrates the importance of publishing, especially the publishing houses of Paris, for providing the necessary outlet for ideas. “Intellectual ideas spread mainly through publications. Whether through books, magazines, or articles, publishing is central to the rise of intellectual movements. For such movements to be successful, authors have to be well connected to the main publishers and need to have sufficient freedom and power to be able to write what they want to write.”[vi] The network Gallimard could provide Camus with would plug him into some of the most resonant writers and thinkers of the time.
As mentioned above, The Stranger was not just a novel, but also an important piece of a longer meditation on the absurd. Therefore, Camus’ relationship with Gallimard, as Kaplan points out, is a key component to his rise to international prominence. Quite frankly, without Gallimard, The Stranger might not have met with its tremendous success.

    Camus’ association with Gallimard was not the only key to his success, however. Gallimard’s star and existentialism’s major voice, Jean-Paul Sartre, also had a lot to do with the success of The Stranger. In his celebrated review of The Stranger, originally published in 1943, Sartre almost single-handedly ushers Camus into the French intellectual network, thus solidifying his reputation as a resonant French intellectual. Still, early on in his review, Sartre points out that, like its author, The Stranger is a book from “across the sea,” highlighting Camus’ Algerian heritage. Sartre’s generous and insightful review gives a certain intellectual legitimacy to the novel. Sartre: “The Stranger is a classical work, a work of order, written about the absurd and against the absurd.”[vii] This Apollonian form of the novel, in the Nietzschean sense, further reinforces the boundary lines that mark the absurd context, a context that we might fold into the Dionysian, again in the Nietzschean sense.

But it would be a mistake to consider The Stranger a French novel; it is, in almost every sense, an Algerian novel, a novel obsessed with the sun and the sea. What is perhaps closer to the novel’s intention is, at least in part, a Mediterranean world in a colonial context: the world of the pieds-noirs, who enjoy French citizenship and the protection it offers, as opposed to that of the colonized Arabs. The novel’s treatment of Arab subjectivity is one of the chief criticisms postcolonial scholars hurl at The Stranger and its author. Yet a purely postcolonial reading of The Stranger severely limits our understanding of the novel. As David Carroll points out, “I would even say that to judge and indict Camus [as Edward Said does] for his “colonialist ideology” is not to read him; it is not to treat his literary texts in terms of the specific questions they actually raise, the contradictions they confront, and the uncertainties and dilemmas they express. It is not to read them in terms of their narrative strategies and complexity. It is to bring everything back to the same political point and ignore or underplay everything that might complicate or refute such a judgment.”[viii] The postcolonial lens that has dominated readings of The Stranger has also relegated it and its creator to a graveyard for Eurocentric authors. Kaplan’s attention to detail, however, locates the nameless murdered Arab in The Stranger in a central, one might even say privileged, position. Almost from the beginning, Kaplan admits to being nearly obsessed with the figure of the nameless Arab. Indeed, the namelessness of this character is one of the pivotal points in her book. As Kaplan discovers, there was a nameless Arab in Camus’ life, one that would lead him straight to the central scene in The Stranger.

In 2015 Other Press published the English translation of Kamel Daoud’s The Meursault Investigation, a retelling of The Stranger from the point of view of the brother of the Arab killed on the beach by Meursault. Daoud, an Algerian journalist living in Oran, writes for the Quotidien d’Oran, a French-language newspaper in Algeria. The Meursault Investigation is an interesting book that reads more in the style of Camus’ The Fall than The Stranger. The protagonist, speaking to us in the first person from a bar in Oran, informs us that there are other facts in the case that we did not hear, chief among them the name of his brother, Meursault’s victim, Musa: “Who was Musa? He was my brother. That’s what I’m getting at. I want to tell you the story Musa was never able to tell. When you opened the door of this bar, you opened a grave, my young friend” (4). Daoud’s text comes dangerously close to being fan fiction. Yet there is something profoundly relevant in the novel. The Meursault Investigation demonstrates a deep understanding of The Stranger and of Camus’ style. To write this book, Daoud had to know The Stranger intimately, and his contribution to the story is, indeed, worthy of consideration. The Meursault Investigation demands to be read, digested, and then read again in the context of the cultural as well as the literary conditions of Algeria before, during, and after its independence.

Kaplan devotes nearly an entire chapter (chapter 26) to Daoud’s novel and the figure of the unnamed Arab who appears in nearly spectral form in The Stranger. She tells us that she had a meeting with Daoud in 2014 in Oran, in which he claimed “we don’t read The Stranger the same way as Americans, French, Algerians” (210). Kaplan’s reading of Daoud’s novel is a revelatory experience for her and, by association, for us. She strategically situates The Meursault Investigation both within and beyond the lens of postcolonial theory.

    Kaplan’s research into the source of the killing of the Arab scene in The Stranger is a remarkable piece of journalism. Her investigation led her through the towns and alleyways of Oran, to dusty archives, and populated streets, all despite an Algerian travel advisory for those holding a United States passport. “For two years, I had traveled to places in France and Algeria connected to The Stranger: I had walked down the former rue de Lyon in Algiers, past Camus’s childhood home. With photographer Kays Djilali, I climbed the steep Chemin Sidi Brahim, knocking on doors until we found the House Above the World, now the home of three generations of Kabyle women who speak neither French nor Arabic. With Father Guillaume Michel from Glycines Study Center in Algiers, I drove out to gold and blue vistas of Tipasa. In Paris, I stood in the dreary spot on the hill of Montmartre where Camus wrote in solitude” (211). At the end of the trail is a name: Kaddour Touil, and a story.

    Kaplan’s research demonstrates that it is not really Camus the author who haunts The Stranger, but rather it is the specter of Meursault who haunts Camus, both in life and after death. Meursault, as Olivier Todd informs us, is a combination of several people Camus knew. “The character of Meursault was inspired by Camus, Pascal Pia, Pierre Galindo, the Bensoussan brothers, Sauveur Galliero, and Yvonne herself. Marie was not Francine. Camus the writer mastered his novel in a way that Camus the man did not control in his life. Meursault never asked himself any questions, whereas Camus was always examining his actions and motivations.”[ix] Authors routinely use what and who they know for characters and their actions in books, but Camus’ relationship with Meursault seems to be as complicated as that character’s relationship with the reader. Kaplan’s book sheds a new light on the complexities of those relationships.

The Stranger is truly a work of world literature, in the sense that David Damrosch defines the concept.[x] With The Stranger we have an Algerian author who wrote in French but was influenced by Danish, Russian, and German thinking, and was stylistically influenced by American authors like Hemingway and James M. Cain. Alice Kaplan gives us a view of The Stranger that joins a growing chorus of scholarship on the controversial book and its author. She provides keen insight that opens up other avenues of thinking about both. Camus’ influence seems to be growing, not diminishing, as we move deeper into the twenty-first century, and this is needed, especially given the growing resurgence of nationalism and isolationist policies, such as Brexit and Trump. Perhaps it’s only literature, and international fiction in particular, that can save us from ourselves. In this age of social media epitomized by the egotistical selfie, international fiction has become more important than ever. Kaplan’s book reminds us that nothing exists in a vacuum, that great works of art come about contextually and pan-culturally. The Stranger might never have been a success without the French existentialist network of the time.

    Andrew Martino is Professor of English at Southern New Hampshire University where he also directs the University Honors Program. He has published on contemporary literature and is currently finishing a manuscript on the concept of security in the work of Paul Bowles.

    Notes

    [i] Edward W. Said. Culture and Imperialism. (New York: Vintage Books, 1994), 174.

[ii] Hannah Arendt. “French Existentialism.” Essays in Understanding: 1930-1954. (New York: Schocken Books, 1994), 192.

[iii] Albert Camus. The Myth of Sisyphus. Trans. Justin O’Brien. (New York: Vintage Books, 1991), 30.

    [iv] Randall Collins. The Sociology of Philosophies: A Global Theory of Intellectual Change. (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2002), 764.

    [v] See Zaretsky’s review in Los Angeles Review of Books (https://lareviewofbooks.org/article/biography-zaretsky-kaplan-camus/) and Williams’ review in the New York Times (Sept. 15, 2016).

    [vi] Patrick Baert. The Existentialist Moment: The Rise of Sartre as a Public Intellectual. (Cambridge, England: Polity Press, 2015), 138-139.

    [vii] Jean-Paul Sartre. “The Stranger Explained.” We Have Only This Life to Live: The Selected Essays of Jean-Paul Sartre 1939-1975. Ed. Ronald Aronson and Adrian Van Den Hoven. (New York: New York Review Books, 2013), 43.

    [viii] David Carroll. Albert Camus the Algerian: Colonialism, Terrorism, Justice. (New York: Columbia University Press, 2007), 15.

[ix] Olivier Todd. Albert Camus: A Life. (New York: Alfred A. Knopf, 1997), 107.

[x] Here I am thinking specifically of Damrosch’s theory of circulation. See David Damrosch’s What Is World Literature? (Princeton: Princeton University Press, 2003) for a full definition of the concept.

  • Nathan Brown — The Logic of Disintegration: On the Art Practice of Alexi Kukuljevic

    Nathan Brown — The Logic of Disintegration: On the Art Practice of Alexi Kukuljevic

    by Nathan Brown

    The body is the inscribed surface of events (traced by language and dissolved by ideas), the locus of a dissociated Self (adopting the illusion of a substantial unity), and a volume in perpetual disintegration.

                            – Michel Foucault, “Nietzsche, Genealogy, History”[i]

    A troubling and enabling fact about the body is that it is never exactly “here” nor “there.” The existence of the body evades its coincidence with language, with thought, with the I, such that it can be described as “the locus of a dissociated Self.” The body is the self, but it is the self as dissociated. Its existence is an index of the dissociation the self is, of the self’s non-identity with itself, with language, and with thought.

    Writing on Nietzsche’s physiological attunement to philosophical thinking, Foucault offers three determinations of the body: 1) the inscribed surface of events; 2) the locus of a dissociated Self; 3) a volume in perpetual disintegration. These determinations abjure the apparent self-evidence of the body’s organic integrity (“the illusion of substantial unity”) in order to consider it as the site of certain operations (inscription, dissociation, disintegration) and as a spatially extended object (surface, locus, volume). The body records events and it instantiates the self’s dissociation. It holds together the dissociated self with those events that traverse it, but the very site of this holding together, its volume, is at the same time coming apart, disintegrating. Language traces events inscribed on the body; ideas dissolve them. Language and ideas separate events from the body, from the surface upon which they are inscribed, exteriorizing their inscription (tracing) or absorbing them into thought (dissolution). The perpetual disintegration of the body is the process by which the surface of its volume ceases to make available such exteriorization or absorption. The disintegration of the body is the gradual coming undone of language and of thought, of the registration of events.

    How might we situate art with respect to these determinations of the body? As a practice, art takes place at the boundaries of language and thought: it is involved with language and thought, yet not (only) linguistic or ideational. To describe the body as “a volume in perpetual disintegration” is to consider it formally: disintegration implies a measure of integration, and this measure, considered as volume, is form. Can this disintegration of form be exteriorized? As the body disintegrates, can it produce a double of its disintegration? Or, if not a double, at least a counterpart, a semblable? If philosophy takes place as the conjunction of language and thinking, how can art, at the boundaries of philosophy, disjoin these by doubling the perpetual disintegration of the volume that the body is, by displacing the locus of a dissociated self?

    These are the questions that will guide my approach to the art practice of Alexi Kukuljevic,[ii] through which I hope to limn a certain science of the logic of disintegration.

    CAPUT MORTUUM

    At Caput Mortuum, Kukuljevic’s solo show in 2012, a plaster cast of the artist’s teeth, his bite, sits atop the highest plinth in the room, spray painted gold and titled The Subject-Object (Fig. 1). The cast displays a pronounced overbite, the upper incisors caving in at the center and thus protruding out diagonally at irregular angles. On a plinth behind and to the left, another cast of the same teeth is presented at the opening of the show, this time in frozen black ink resting on a neon green edition of Hegel’s Phänomenologie des Geistes. The piece is titled The Object-Subject (Fig. 2 & 3). As the duration of the opening unfolds, the frozen cast melts into a liquid black pool, soaking the book beneath and forming a minimally differentiated volume of black ink against the background of the black plinth below. Watching over this process of disintegration from the wall above is a silkscreen print of Hegel’s portrait, his forehead exhibiting an unseemly goiter of spray foam with a nail driven through its center, neon green paint seeping from the wounded brow of the great thinker, running over the left eye and down the philosopher’s face across the surface of the print.

    Figure 2, The Object-Subject (2012)
    Figure 3, The Object-Subject (melted)

    This configuration establishes a basic dialectic of the artist’s practice. A singular or signature trait of the artist’s embodied subjectivity—his irregular bite—is cast as a sculptural object and presented to the viewer’s eye coated in the color of value, gold. A frozen double of this object, cast in the color of negation and the medium of inscription—black ink—displays the impermanence of its objecthood, the temporal finitude of its form, by melting into an indistinct pool. The subject becomes object on the condition that the object becomes subject, yet the doubling of the object (molded in plaster as well as black ink) enables it to sustain its form even as it melts into fluidity. The formal and fluid excess of this doubling is suggested by the seepage of paint from the pierced surface of Hegel’s printed portrait, as if the provocation of the thinker’s absolute judgment—that “the being of Spirit is a bone” (Hegel 1977 [1807]: 208)—called for a trepanation, by way of verification. Can we find the substance-subject in the skull? In the Phenomenology’s chapter on “Observing Reason,” philosophy reaches the point at which thought thinks its unthinking substrate and thus sublates that substrate as thought. It then becomes the vocation of art to render the residue of this sublation—the persistence of thought’s unthinking body—as the obdurate, curiously inconceivable, condition of its possibility.

    Art thus inhabits the disjunction between the highest and the lowest, the spiritual fulfillment of self-comprehending life and the physical function, as Hegel puts it, of taking a piss (210).[iii] From the point of view of philosophy, it “must be regarded as a complete denial of Reason to pass off a bone as the actual existence of consciousness” (205). From the point of view of art, the materials in which consciousness is inscribed are the ineliminable ground of formal specificity. “The body” is a relay between subject and object, but one that cannot simply be “lived.” Thinking itself as a thing that thinks, the thinking thing finds its particularity in the material substrate and remainder of this operation: not just any skull, but this skull; this skull which is, impossibly, “mine.” “My body” is that which is not (quite) either mine or me, yet which is I. The being of Spirit is not just any bone. What dissolves into fluidity through the becoming subject of the object, or resolves into solidity through the becoming object of the subject, is the specificity of these teeth, the irregular contours of this bite, and it is on the condition of encountering resolutely material form that universality can include particularity.

    Within the cut between the Subject-Object and the Object-Subject, art tarries with this relay between the specificity of the material particular and its insistence, as specific, within the genericity of the universal. This is one of the rifts that art inhabits.

    RIFT 

Absolute knowledge requires the reconciliation of subject and object. This is not an option for art. If art knows anything (this is unclear), it is that the subject cannot even be reconciled with itself, let alone with the object. The art object is an unreconciled remainder of the rift between the I and the Me. “Me” is the object form of the pronoun “I.” When I say “I,” the “me” is the unfortunate residue of my enunciation. “I = I” enunciates the genesis of the subject, but, for better or for worse (for worse), the subject has a body that remains unequal to the equals sign, that is unreconciled with the I to which it supposedly belongs. There “I” am (me), just when I hoped to be “here.” The golden egg of self-equivalence is held aloft (Fig. 4), supported by the doubled singularity of the irregular bite, by the mold of the split jaw that is the ground of articulation, the structure of the mouth, the condition of enunciation, or “The Limits of Grammar” (as another title has it).

    I = I splits into the dissociation of the I and the Me, held together as the body, exteriorized as the art object — the residue of such dissociation. In A Little Game Played Between the I and the Me, Kukuljevic’s contribution to the Nouvelles Vagues show at Palais de Tokyo in 2013 (Fig. 5), the central piece titled The I and the Me consists of two formally similar but morphologically discrepant sculptural masses, one of which is placed solidly upon a pedestal while the other hangs precariously from its edge, as if having just climbed up on stage or about to fall off.

    Figure 4, I = I (2012)

From a speaker within these asymmetrically relational forms, not-quite mirror images, a slow, dry, tired voice emanates into the gallery space:

    I say: I, I, I. You say: me. Me say, you. You say: I, I, I. I say, me.

    You answer to human. You grind your teeth. You point with the jaundiced nub of a finger. Your jaw drops on its hinge. Your thumbs bend at the joint. You toss word upon word. Live in abstraction. Skip stones. Sip whiskey. Polish silver. Lay claim to the luxury of fine cotton. Vomit champagne. And know how to sharpen the blade.

    ….

    There is an unease in your cadence. Your pace is hobbled. Your bones lack alignment. Your stare a milky grey. That hole in your head oozes something unrefined. Something is making you reach for your nail file. Adjust your posture. (Kukuljevic 2013)

    Figure 5, A Little Game Played Between the I and the Me (2013), Installation View

    Art is a pastime, a distraction, an indulgence, or a chore, like skipping stones, sipping whiskey, or polishing silver. It is a luxury, a guilty pleasure, like fine cotton or champagne, yet also something of an impediment, a burden, a limp, or perhaps the cane a limp requires. The hangover after the champagne. It is at once a decadent practice and the tick of the uneasy, the correlate of both the hobbled pace and the easy profligacy of the dissolute aristocrat. A goiter. A gouty toe. An overgrowth. Something that makes you reach for your nail file. It can hardly keep its balance on the pedestal upon which it is placed. The I stands firm, but the Me falters. Or the Me pretends to solidity, as the I wavers. Art is the imbalance of their mutual reckoning, their teeter-totter, the milky grey substance of their self-regarding stare, the hinge upon which the jaw issues abstractions, the sharpened blade with which one arm stabs the other.

    The rift between the I and the Me is the rift within the I = I, and consciousness of this rift demands its object, “the locus of a dissociated Self,” in order to convey its dissociation. The recognition of this dissociation solicits its displacement. Yet the object into which this locus is displaced must itself be doubled if it is not merely to suggest an exteriorization of the self, but rather the exteriorization of the self’s dissociation. The doubling of the object is the double of the dissociated (rather than unified) subject. Art which knows the riven conditions of its own possibility duplicates the singular form of its object, breeds its replication, demands reiteration, refuses the originality of the origin. Art repeats.

    (C8H8)n

    Figure 6, The Subject’s Alchemical Residuals (2012), detail

In his sculptural work, Kukuljevic’s preferred material is styrofoam (expanded polystyrene).[iv] The I and the Me, for example, is composed of rectangular polystyrene panels stacked unevenly or clumped together vertically, coated with cement, and globbed with spray foam. Kukuljevic shapes the material by cutting it, burning it with a blowtorch, or melting it away with acetone, a substance with roots in alchemical practices (see Gorman and Doering 1959). Thus, one of The Subject’s Alchemical Residuals (Fig. 6) is a curved wedge of styrofoam with a conical hole melted through its center, the pocked surface around the base of the conical hollow marking the damage done by splashes of the corrosive substance. Like The Subject-Object and The Object-Subject, this might be read as something of a demonstration piece, a formal synecdoche or concentrated reduction of the artist’s concerns and methods, a minimal unit of his practice.

    The subject makes a hole in the object, which thus becomes an art object. The hole is not made by digging, by a practice of removal that would merely shift its material off to the side. It is made by dissolution, dissipation, dispersion: the hole itself, not the material subtracted from it, is the visible remainder of its production. What is produced is not a pile but an absence, a negation. The material bears the trace of this negation without remainder; that which remains is spirited away. Thus the art object becomes the residue, the residual, of an act of negation, its damaged remnant. It is an alchemical residual insofar as, qua art object, it has acquired value. Value is acquired by the material remnant of the negation of matter; it is its immaterial companion, inscribed as an absence within the object that makes it art.

    If the production of acetone has its roots in premodern alchemical practices, the production of polystyrene (beginning in the 1930s at IG Farben and 1941 at Dow Chemical) can be traced to the emergence of aromatic polymer chemistry, predicated upon Kekulé’s modeling of the benzene ring in 1865, and thus coeval with Marx’s theory of the commodity. The coincidence is merely suggestive, yet the chemical fabrication of organic compounds (“synthesis”) shadows the history of real subsumption and the attendant rise of mass consumption like an uncanny double (see Leslie 2005). Not only industrially produced objects but the molecules of which they are composed become artificial. Marx tells us that

If we subtract the total amount of useful labor of different kinds which is contained in the coat, the linen, etc., a material substratum is always left. This substratum is furnished by nature without human intervention. When man engages in production, he can only proceed as nature does herself, i.e. he can only change the form of the materials. (Marx 1990 [1867]: 133)

    This remains the case, but with the rise of chemical synthesis the production of the material substratum itself becomes a matter of labor, such that the only remaining substratum “furnished by nature without human intervention” are atoms of carbon and hydrogen — not even the molecular forms in which these are combined. As if in uncanny response to the metaphorical provocations of Marx’s chemical analogies in the first volume of Capital, the commodity becomes artificial in its very substance. The abstraction of socially necessary labor time saturates not only the object produced from natural materials, but also the molecular structure of the materials themselves, such that even the latter are soaked in the immaterial substance of value.[v] “It is absolutely clear,” writes Marx,

    that, by his activity, man changes the forms of materials of nature in such a way as to make them useful to him. The form of wood, for instance, is altered if a table is made out of it. Nevertheless, the table continues to be wood, an ordinary, sensuous thing. But as soon as it emerges as a commodity, it changes into a thing which transcends sensuousness. (163)

    Synthetically produced organic compounds, such as polymers, are in this sense not “ordinary sensuous things” (“materials of nature”) but rather materials that already “transcend sensuousness,” materials that are never not already commodities. Not only the process of production but also the materials upon which it works are fully subsumed.

Thus a styrofoam cup is a commodity made of a material that has no “natural” existence outside of the commodity form, as is a polyester dress. So is a rectangular panel or a molded form of expanded polystyrene packaging material, but in this case the relation of the commodity to its consumption is rather curious. Here we are dealing with a commodity whose use value is to protect commodities as they circulate. A consumer buys something else, and some styrofoam comes with it, a necessary if unwanted accompaniment. Indeed, styrofoam packaging is in a particularly abject position insofar as it does not even carry out the other functional purpose of packaging, that of advertising the product within, in the manner of the all-important box. Styrofoam is a mere intermediary between the alluring surface of the disposable exterior and the desirable utility of the interior object. A material byproduct of circulation, expanded polystyrene packaging is both invisible at the point of sale and already waste at the point of consumption. Even the consumer’s cat, who loves to sleep in cardboard boxes, wants nothing to do with molded styrofoam once it has been cast aside. Artificial even in its molecular constitution, unwanted by the consumer to whom it is destined, expanded polystyrene packaging is the paradigmatically unnatural detritus of the capitalist transformation of nature.

    The rendering of this destitute material as art is its salvation, or one more indignity to which it is subjected. At last, in any case, it is put on display, forming the curious substance of something someone might even buy.

    PERSONAE

    In the hospital rooms on either side, objects—vases, ashtrays, beds—had looked wet and scary, hardly bothering to cover up their true meanings. They ran a few syringesful into me, and I felt like I’d turned from a light, Styrofoam thing into a person. I held up my hands before my eyes. The hands were as still as a sculpture’s.

                            – Denis Johnson, Jesus’ Son

    “From a light, Styrofoam thing into a person”: Kukuljevic’s art practice reverses this conversion. The movement from person to styrofoam thing is productive not only of sculpture, but also of personae: those artificial figures of personhood through which one presents oneself to the public.

Spending some time at his 2012 solo show, Don’t Be a Dreamer, Mr. Me, one comes to feel an odd sense of consolation among its major pieces: An Orgy of Stupidity (Fig. 7); Idiot (Fig. 8); A Gangrenous Fop (Fig. 9). The titles suggest a shared lack of intelligence, foregrounding a common trait of cognitive degeneracy. Indeed, not much can be expected by way of sparkling conversation from chunks of burned and painted styrofoam. “Everything about the show appears to be unhealthy — mentally and physically,” found one reviewer (Schwartz 2013). It is true, a sojourn among these initially unattractive, mildly poisonous forms seems not to promise the edification of a trip to the gym or the library. Yet one develops a certain fondness for them, this cast of characters; an improbable affection gradually accrues in their mere presence.

Deleuze recognized that stupidity is both the enemy of and the condition for philosophical thinking. One thinks in order to combat stupidity, yet in order to begin thinking at all, one has to be stupid. There must be an interruption of the order of the given, of the already known, of what Deleuze called “the image of thought,” in order for thought to encounter its own ungroundedness: in order for thought to know that it does not know, and thus begin to think. In order not to be stupid, one has to be stupid: this is a contradiction with which philosophy has been embroiled since Socrates. What Konrad Bayer called “the sixth sense,” says Kukuljevic, involves “knowing when to risk being a dummy” (Kukuljevic, 2013-2014). But Deleuze goes beyond merely knowing when to take this risk, claiming that “Stupidity (not error) constitutes the greatest weakness of thought, but also the source of its highest power in that which forces it to think” (Deleuze 2004 [1968]: 345). Just as “the mechanism of nonsense is the highest finality of sense,” he argues, “the mechanism of stupidity (bêtise) is the highest finality of thought” (193). Do these styrofoam forms impart some of their stupidity to the viewer? Do they thus solicit thinking?

    One notes their vaguely anthropomorphic aspect. An Orgy of Stupidity looks like an enormous malformed grey skull accosted by pink spray foam, brooding dull-wittedly upon its table. Holes are melted into the “front” of the piece, resembling hollow eyes, while deeper crevices puncture it behind and below, visible when viewed in the round. Deceptively simple, the formal construction of the piece is in fact carefully articulated. Positioned at the back of the room, the bulk of this sculpture anchors the space, at once drawing the gaze and looking on, surveying the assembled art without having much to say about it. The show seems to turn upon this piece, a dead-head like a humanoid boulder measuring the depth or frivolity of our contemplation, of our chatter, against the taciturn obduracy of its inorganic impassivity.

    Figure 7, An Orgy of Stupidity (2012)

    While An Orgy of Stupidity rests solidly upon its base, Idiot is propped against a load-bearing column, while the large, roughly rectangular form of A Gangrenous Fop balances upon a single dowel anchored in a styrofoam base resting on a plinth. The fragile support of the latter piece drives home the lightness of what seem to be massive forms, the interior airiness of imposing exteriors, often sealed with a layer of concrete. This counter-intuitive play between the heaviness of surface and the lightness of depth is mediated by the technique and motif of perforation running through Kukuljevic’s practice. It is enacted by his melting away of surfaces in order to bore into sculptural forms and also thematized in wall pieces involving concrete and chair caning (Fig. 10), a material he values for the concomitant complicity and cancellation of surface and depth suggested by its woven form.

    Figure 8, Idiot (2009)
    Figure 9, A Gangrenous Fop (2012)

    The surface is more weighty than the interior — that is the sort of judgment one might venture looking at a piece like A Gangrenous Fop, with its lightly balanced heft. Yet the concrete surface itself is punctuated by holes that confuse or undo this distinction, leading us into the form along its surface in pursuit of depth, which thus becomes surface. Likewise, the use of spray foam to combine sculptural masses and to fill in crevices between them suggests an eruption — or at least a slow, coagulating leakage — of the interior. Meanwhile, color mediates this formal dialectic. Synthetic, superficial fluorescent shades seep from interiors or coat their exposed crevices, highlighting absences opened by corrosion, or the white sublation of color constitutes a pure yet perforated surface through which solid grey concrete seeps.

    Figure 10, Concrete IV (2012)

    If the somewhat familiar sculptural forms (one of them is titled A Human-Like Creature) exhibited at Don’t Be a Dreamer, Mr. Me come to seem sympathetic, perhaps it is because they have been through so much. Punctured, corroded, seeping foam and stained with garish colors, carefully poised or precariously propped up, they have an air of weary endurance about them, as if about to collapse or retire yet in for the long haul by virtue of their molecular inertia and their improbable value as art. They seem fated to be tired for a long time, with no choice but to make a display of themselves. This wry anthropomorphism solicits transferential self-pity, such that a title like Idiot may come to feel like a way of insulting the audience — a rhetorical inclination to which Kukuljevic is happily prone. In the end one takes it well. There is something like a communal self-loathing to be gleaned from such a show, the circulation of self-recognition as the concession of its weary stupidity, its dissolution (Fig. 11).

Given the dissociation of the self, its perpetual disintegration, perhaps an encounter with the stupidity of self-recognition is one of the most precious objects art has to offer — or at least its most sincere gift. It snaps one out of a bland tête-à-tête with oneself, or with another, such that one begins to think. We come to feel affection for the forms that gift takes.

    Figure 11, Even Misanthropes Grow Weary (2014)

    SMOKE

    Figure 12, One or Two Things I Know About A.K. (2012-2013)

    Having started with bone, why not end with breath? Both have been said to be spirit. Yet even as Hegel could subsume the materiality of the skull within the ideality of the concept, breath is a materialization of the ineffable. This is a recognition readily available amid a cloud of cigar smoke, which constitutes for Kukuljevic not only a medium in its own right but a method of attunement, a dissociated Stimmung:

    Trapped between index and middle finger, a cigar traces a delicate line, its stump more unseemly. However, if held with poise, a cigar is a simple and elegant machine, much like a crowbar, that provides the mind with the material impetus for prying off an impression of the soul, as one peels off a latex mold.

    “Each cigar is a snapshot,” he writes, “of the soul’s decomposition” (Kukuljevic 2014). The cigar is a prop, like a sculpture. Yet it is a prop whose substance becomes interchangeable with that of the subject who wields it, to the detriment of both the subject and the object. The cigar is the temporary site of a chiasmus whereby both the subject and the object burn down to a material remainder, the former more slowly than the latter but no less surely. The billowing form of the cloud of smoke “focuses the mind on life’s dissipative march” (Fig. 12).

    Marcel Duchamp understood the pitfalls of relating to art primarily through the figure of the object, or “the art object.” For if something is an object, how can it be art? And if it is art, how can it be an object? Implicit in these questions is the immaterial surplus exhaled by any object that comes to be called “art,” the ineffable imprimatur invisibly stamped upon that which the term designates, an imprimatur that converts it into something other than what it is. Duchamp thus focused his attention upon what he called the infra-thin: “when the tobacco smoke smells also of the mouth which exhales it, the two odors marry by infra-thin.” The two odors, he says. Yet this figure of the infra-thin involves not only a marriage of two odors, but also of the object, the subject, and the fumes it exhales, mediated by the corporeal hollow of the mouth. Here the infra-thin is a complex of the subject-object, or the object-subject, which entails not only the ephemeralization of the corporeal but the corporealization of the ephemeral, a physics of the metaphysical and a materialization of the ideal, like “prying off an impression of the soul, as one peels off a latex mold.”

    Figure 13, The Physiology of the Cigar, Photogram (2014)

    If the smoking of cigars is properly considered part of Kukuljevic’s art practice (evident in his habit of filling the gallery with cigar smoke before openings), the photogram is its saleable analog (Fig. 13 & 14). Like his silkscreen prints of coral, or his wall sculptures with chair caning, his photograms tarry with the perforations constitutive of surface and with the permeability of the object. Just as the cigar burns into ash, a fragile record of its temporal dispersion, the retentional action of the photogram gives us to see the legible transparency of material structure, the ghost of the incorporeal that haunts all bodies.

    Figure 14, Torn Vitola, Photogram (2014)

    Yet the record of the cigar’s dispersion, its ash, is also its material residue — like styrofoam packaging that arrives alongside the consumer’s commodity. It needs somewhere to end up, to repose, and thus calls not only for the light touch of the photogram but also the hospitable embrace of the ashtray (Fig. 15). The propped up body of the sculpture would then support the papery corpse of the cigar, leaving the viewer to contemplate the degree to which form follows function in the case of so fleshly a friend of the infra-thin. This is the highest form of practicality we will encounter in Kukuljevic’s practice: the making of a place, barely contained within itself, to put the leavings of disintegration. Perhaps “the object” is better understood as such a place — and this is the sort of place, indelicately distended and on the verge of collapse, that the artist might call art.

    Figure 15, Ashtray #3 (2015)

     

    Figure 16, Trading Places (2015)

     

It is in this sense that I view Trading Places (Fig. 16) as a particularly notable piece in Kukuljevic’s oeuvre. Whereas most of the sculptural works are either tenuously propped or heavily settled, this one rests upon a stable base, yet one that is mobile. Its form is again vaguely anthropomorphic, but in this case diminutive — a sidekick of sorts, like Lear’s clever fool or an R2D2 suffering the fate of Tithonus. The figure is burned out, carved away, its interior exposed and its surface rough-hewn, yet its dominant shade is a light azure that lends it a certain celestial freshness amid the charred remains it barely holds together. At the center of the piece, the same thin wood stick that bends under the burden of supporting some of the sculptures, in this case, holds aloft its own offering, cradled in a bright yellow latex glove, as if in supplication to the viewer. Here, the piece seems to intimate, this is what I have for you.

    What is thus presented is a bit of ash, the stump of a cigar, cupped within an indeterminate grey residue. Perhaps this is a present, maybe a presentiment. Sculpture, trading places, offers up a volume in perpetual disintegration as if posing its own question to the viewer, to the body of the subject who is not allowed to touch it: what do you have to offer me?

    BIBLIOGRAPHY

Deleuze, Gilles. Difference and Repetition. (1968). Translated by Paul Patton. London: Continuum, 2004.

    Gorman, Mel and Charles Doering. “History of the Structure of Acetone.” Chymia. 5 (1959): 202-208.

Hegel, G.W.F. Phenomenology of Spirit. (1807). Translated by A.V. Miller. Oxford: Oxford University Press, 1977.

    Kukuljevic, Alexi. Audio Track, A Little Game Played Between the I and the Me. Nouvelles Vagues, Palais de Tokyo, 2013.

    Kukuljevic, Alexi. Exhibition Text for Don’t Be a Dreamer, Mr. Me. (December 6, 2013 – January 19, 2014). http://www.marginalutility.org/exhibitions/2013/alexi-kukuljevic-dont-be-a-dreamer-mr-me/.

Kukuljevic, Alexi. “More or Less Art, More or Less a Commodity, More or Less an Object, More or Less a Subject: The Readymade and the Artist” in The Art of the Concept. Edited by Nathan Brown and Petar Milat. Frakcija 64/65 (2013): 62-70.

Kukuljevic, Alexi. Exhibition Text for You Can’t Rely on the Joke as the Only Mode of Social Relation…. (March 14 – April 30, 2014). http://www.kunsthalle-leipzig.com/kukuljevic.html

    Leslie, Esther. Synthetic Worlds: Nature, Art, and the Chemical Industry. London: Reaktion Books, 2005.

    Marx, Karl. Capital: Volume 1. (1867). Translated by Ben Fowkes. New York: Penguin, 1990.

    Schwartz, C. “Alexi Kukuljevic Dares Not to Dream at Marginal Utility.” Knight Blog (December 10, 2013). http://www.knightfoundation.org/blogs/knightblog/2013/12/10/alexi-kukuljevic-marginal-utility/

    NOTES

    [i] Thanks to Petar Milat for drawing my attention to this passage.

[ii] Kukuljevic’s work has been included in exhibitions at Tanya Leighton Gallery (Berlin, 2016), Kavi Gupta (Chicago, 2015), Palais de Tokyo (Paris, 2013), De Appel (Amsterdam, 2012), and has been shown in solo exhibitions at Å+ Gallery (Berlin, upcoming 2016), Kunsthalle Leipzig (2014), ICA Philadelphia (2013), Jan Van Eyck Academie (Maastricht, 2013), and SIZ Gallery (Rijeka, 2012). He holds a Ph.D. in Philosophy from Villanova University, where he wrote a dissertation titled “The Renaissance of Ontology: Kant, Heidegger, Deleuze” (2009). He was a researcher at the Jan Van Eyck Academie (2012-2013). His book Liquidation World: On Forms of Dissolute Subjectivity is forthcoming with MIT Press. He is the author of an artist’s book, Cracked Fillings, available at alexikukuljevic.com.

[iii] Hegel writes, “The infinite judgement, qua infinite, would be the fulfilment of life that comprehends itself; the consciousness of the infinite judgement that remains at the level of picture-thinking behaves as urination [verhält sich als Pissen]” (210).

    [iv] Strictly speaking, “Styrofoam” is the brand name of extruded polystyrene produced exclusively by Dow Chemical, which is used in craft and insulation applications and is usually blue or green. The term is more loosely and commonly applied to expanded polystyrene in general, such as that used for foam cups or molded packaging. Following this common usage, I will refer to expanded polystyrene and styrofoam interchangeably.

    [v] Kukuljevic has published an essay on the relationship between the commodity form, the readymade, and the figure of the artist. See Kukuljevic 2013.

     

  • Pieter Lemmens and Yuk Hui — Apocalypse, Now! Peter Sloterdijk and Bernard Stiegler on the Anthropocene

    Pieter Lemmens and Yuk Hui — Apocalypse, Now! Peter Sloterdijk and Bernard Stiegler on the Anthropocene

    by Pieter Lemmens and Yuk Hui

    ‘You really take no account of what happens to us. When I talk to young people of my generation, who are about two or three years older or younger than me, they all say the same: we no longer have the dream to found a family, to have children, or a profession, or ideals, like you did when you were teenagers. That’s all over, because we are sure that we will be the last generation, or one of the last, before the end’

    The Shock of the Anthropocene

In the above quote from the novel L’Effondrement du temps by the anonymous writer collective L’impansable, the fifteen-year-old Florian addresses the current generation of politicians and, more generally, of adults responsible for our world and its future (L’impansable 2006). The French philosopher Bernard Stiegler has recently quoted this statement in many of his talks, and it also features prominently in his new book Dans la disruption. Comment ne pas devenir fou? (In the Disruption: How Not to Go Mad?; Stiegler 2016). Florian’s remark reveals a strong sense of melancholia about the arrival of the end. For Stiegler, this is not simply rhetoric. In an interview with the French newspaper Le Monde on 19 November 2015, shortly after the Paris attacks, Stiegler confesses: “I can no longer sleep during the night, not because of the terrorists but because of worries that my children will no longer have any future” (Stiegler 2015a). What makes Stiegler so sad, and even so pessimistic, about the current situation?

As we see it, Stiegler is not exaggerating, but rather telling the truth. It is true that he has been accused of being a pessimist, because of his statements on the future of work, automation, editorialisation, etc. The general excitement about technological developments may give the impression that the world is moving towards a brighter posthumanist or transhumanist future. Many scholars working on technology tend to be easily satisfied with the phenomena emerging out of the new digital infrastructures and hence dismiss any fierce critique of technology as the gesture of a neo-Frankfurt School. Stiegler calls this attitude dénégation (denial). In his new book, Stiegler puts Florian’s accusation on a par with the shocking revelations of global “whistleblowers” like Edward Snowden, Chelsea (formerly Bradley) Manning and Julian Assange, and characterizes it as a parrhesia in the sense made famous by the French philosopher Michel Foucault, i.e., as a “frank and free” saying of things as they are, or in other words a frank and courageous speaking of the truth. In this case, the truth of our time is a truth to which, according to Stiegler, virtually everyone prefers to close their eyes since it is too traumatic, inconceivable and appalling. It speaks not just of the possible but of the rather likely and imminent end of humanity, or at least of human civilization as we know it.

What is this truth of our time? Perhaps one can start with its causes, which are multiple: the global climate and ecological crisis, resource depletion, military development, digital industrialization and a runaway consumerism accelerating daily through the intense exploitation of people’s attention and desires – there is a whole range of phenomena that seem to lead inevitably towards an apocalyptic end. If we are not able to reverse these destructive trends, humanity may soon confront its own extinction. The principal task and first duty of philosophy today, according to Stiegler, is to give a response to the parrhesia of Florian. Let’s start by introducing the subject of the Anthropocene and the scientific debates related to it. Many climate scientists[1] warn of an imminent large-scale shift in the Earth’s biosphere, whose consequences will be unpredictable but in all likelihood catastrophic, especially if nations do not get together quickly to steer the “anthropogenic impacts” on the biosphere in a more beneficial direction. This mega- or ultra-wicked problem (as it is called in policy circles) is arguably the essence as well as the urgency of what has recently become known as the “Anthropocene”. This term was introduced in 2000 by the Dutch climate scientist and atmospheric chemist Paul Crutzen to identify the new geological era that, in his view, we have entered at least since the Industrial Revolution in the late 18th century (Crutzen 2002). According to his now widely accepted hypothesis, “the human” (anthropos in Greek), or at least a certain part of humanity, has become the most important geological (f)actor, having more impact on the state of the biosphere than all natural factors together. The human has thereby become de facto and willy-nilly responsible for the biosphere and by implication for its own future fate.

    The so-called “great acceleration” that started after World War II is considered to be responsible for finally bringing about what French historians Christophe Bonneuil and Jean-Baptiste Fressoz have called “the shock of the Anthropocene” (Bonneuil and Fressoz 2016): the world-wide dawning of humanity’s largely destructive impact on its own planetary life-support system. The predictions of the consequences of this for humanity in the short and long run vary, but even the Intergovernmental Panel on Climate Change (IPCC), known to represent the rather cautious mainstream view, has been forced to continually adjust its forecasts to more gloomy outcomes. The most extreme predictions, like those of the American ecologist Guy McPherson, foresee a near-term human extinction event within three decades (McPherson 2013).

We would like to address the Anthropocene from both a philosophical and a political perspective. The former concerns the existence and responsibility of humans; the latter the political struggle that we must amplify. The term Anthropocene is ambivalent: on the one hand, it leads to the illusion that man is back in the center, as one of the scientific researchers remarked during a recent conference entitled “How to think the Anthropocene?”[2] The researcher proudly stated that, for the first time since the Copernican revolution, “man” has rediscovered her/his centrality. On the other hand, this revolution is responsible for global warming, the widespread destruction of ecosystems and the alarming loss of biodiversity that some authors (like Elizabeth Kolbert) have called the “sixth mass extinction”, caused this time by human beings themselves (Kolbert 2014). In other words, if it is responsible for putting “man” back in the center, it might also lead to her/his destruction.

But what does this “geological event” of the Anthropocene really mean? Some geologists, or authors who are aligned with the thinking of “deep time”, see the Anthropocene as an insignificant event in comparison to the hundreds of millions of years of geological history. The earth is in a constant process of destruction and reconstruction; the extinction of a species is one of those contingent events that carry no significance for the life of the earth. We may want to call this attitude, exemplified for instance in the work of the Dutch geophysiologist Peter Westbroek, geo-centrism or geo-reductionism (Westbroek 1992). The problem is not that such authors are wrong concerning earth science, but rather that they are right; in fact, they are so correct about it that they don’t see the problem.

Marxist authors like Jason Moore, Maurizio Lazzarato and Christian Parenti argue that we should talk about the Capitalocene instead of the Anthropocene, since it is not so much “the human” as the capitalist mode of production that is to be held responsible for the current devastation and exhaustion of the Earth’s biosphere (Moore 2016). Like Slavoj Žižek, they promote a more class-oriented view. Moore, for instance, re-interprets the Anthropocene as the result of capitalism’s way of organizing nature, situating its beginning not in the 18th century but in the long 16th century of primitive accumulation and the large-scale land-grabbing by budding capitalists known as the “enclosure of the commons” (Moore 2015). McKenzie Wark, another Marxist author who is nonetheless critical of the notion of the Capitalocene (Wark 2015a), develops a “labor perspective” on the epic challenge of the Anthropocene, one inspired by the work of the early Soviet authors Alexander Bogdanov and Andrey Platonov, the feminist theorist Donna Haraway, and the Californian writer Kim Stanley Robinson (Wark 2015b).

    Many authors contest the term Anthropocene also because it suggests the existence of one unitary subject, “the human” or “humanity”, which would be responsible for the current crisis. However, as the German philosopher Peter Sloterdijk jokingly remarked in a recent public debate with Stiegler in Nijmegen in the Netherlands on the 27th of June (Sloterdijk and Stiegler 2016), sending an e-mail to humanity@planet.earth will inevitably yield a delivery failure message: “the human” or “humanity” does not exist. It is also obvious that some parts of humanity, like those belonging to the rich and affluent societies of the West, are much more “guilty” than, say, those fractions who live in the so-called developing world, the cruel fact being that the latter are generally much more affected by the devastating consequences of climate change than the former (in India for instance temperatures have been rising to a sweltering 51 degrees Celsius and many people are expected to die due to extreme heat and drought) (Wyke 2016).

In his 1979 book Das Prinzip Verantwortung (translated as The Imperative of Responsibility), the German philosopher Hans Jonas already warned of the danger of humanity’s self-destruction due to its immense technological power and ability to destroy the planet (Jonas 1985). Jonas called for a new ecological ethic of responsibility and thereby proved himself to be an Anthropocenic thinker avant la lettre. His book was published at the onset of the so-called neoliberal revolution, which swept away virtually every environmental policy that had gradually gained support in the seventies and unleashed a global economic world war in which we are all forced to compete against each other, a war that is on a fatal collision course with the earthly ecosystem. The big question is whether, and how, we can reverse this process: how can we transform our hugely destructive impact on the earth into a more constructive and responsible one in order to avert the global catastrophe of which the current global crisis is only the prelude? As the geobiologist Peter Ward put it in his book The Medea Hypothesis: ‘We are in a box. Ultimately it is a lethal box, a gas chamber or fryer, depending on how things work out. If we are to survive as a species, we will have to do a Houdini act’ (Ward 2015: 141).

    Two Proposals for a Reversal: Neganthropocene and Co-immunization

    What could be the response to the Anthropocene besides emphasizing responsibility? Or is there a more primary question still: who is responsible for what? Let us look at the diagnoses of two already mentioned thinkers who have both thought extensively about the human-technology relationship in recent decades: Peter Sloterdijk and Bernard Stiegler. Both offer some insights not only into the technological but also the historical and political, and even anthropological problem of the “shock of the Anthropocene”, which could be fundamentally understood as the consequence of neoliberal globalization of technology and capital.

Sloterdijk, who calls himself a “leftist conservative”, is gaining increased attention in the Anglophone world yet is still a relatively marginal figure in it (unlike many of his continental colleagues of the same age and stature). His philosophical perspective is decidedly Nietzschean yet he is also very much influenced by Heidegger, Foucault, Deleuze, and Lacan, as well as the German tradition of philosophical anthropology (for example, Arnold Gehlen, Max Scheler and Helmuth Plessner). He became instantly famous in Europe in 1983 with his explosive debut Critique of Cynical Reason in which he diagnosed the current Zeitgeist as one of “enlightened unhappy consciousness” (with obvious allusions to Hegel) and a systemic hyper-cynicism that he hoped to counter with a new form of non-intellectual, bodily, popular-plebeian, humorous-grotesque, dadaesque and explicitly low-brow “critique”, inspired mainly by the brilliantly shameless performances of Diogenes of Sinope. His was a “critique beyond critique” that he called “kynicism” (with a k) (Sloterdijk 1988).

While in this huge two-volume treatise Sloterdijk still presented himself as an heir of the tradition of critical theory of his principal teachers from the Frankfurt School, notably Adorno, Horkheimer and Bloch, he was clearly a very recalcitrant and ultimately rather unfaithful one. In his 1989 book Eurotaoismus. Zur Kritik der politischen Kinetik, a thesis on the postmodern condition and its discontents, he largely exchanged the Frankfurt School for the “Freiburg School” and developed a Heidegger-inspired critique of modernity’s “total mobilization” in terms of a kinetic reinterpretation of the latter’s notion of releasement [Gelassenheit]. In the later chapters of this book he too proved himself to be an Anthropocenic thinker avant la lettre by pointing toward the fragility and finitude of the Earth as the base upon which human cultural-historical projects unfolded. He proclaimed that human culture would have to be increasingly responsible for its maintenance in the future, calling for a global ecological turn of the whole human endeavor (Sloterdijk 1989).

Yet it is only in his monumental Spheres trilogy from 1998-2004 (Sloterdijk 2004), a grand sphero-immunological reinterpretation of the evolution and history of humankind and all the religious and metaphysical systems it produced (in other words, a history that operates from the perspective of humans as self-immunizing creatures who are sphere-building, sphere-abiding and sphere-borne beings), that Sloterdijk develops a philosophical anthropology able to fully account for the anthropocenic condition we are inescapably entering. In particular, the post-holistic, plural spherology or polyspherology of co-isolationist co-existence developed in the third volume of Spheres, titled Foams, is eminently suited to considering the human condition in the age of the Anthropocene (Sloterdijk 2016a), as Sloterdijk’s friend Bruno Latour has justly remarked (Latour 2008).

    *

Bernard Stiegler started his academic career as a commentator on Martin Heidegger, more specifically on the question of technology in Heidegger’s thought. Unlike Sloterdijk, who takes the question of space and topology in Heidegger’s thought further and has suggested “Being and Space” as an alternative title for his Spheres-project, Stiegler’s work centers on the question of time and time’s relation to technology through what he calls tertiary retention, a notion that completes the circle of Husserl’s theory of retentions and protentions (Stiegler 1998). Tertiary retention is the technically captured trace, as well as the support, of both primary retention (e.g. the melody that is retained in our mind) and secondary retention (e.g. the melody that we can recall tomorrow). For Stiegler, tertiary retention is a supplement to as well as an “exteriorization” of memory (in the words of the French paleoanthropologist André Leroi-Gourhan), through which he attempts to re-read the history of European philosophy as a history of the suppression of the question of technics – as a response to Heidegger’s critique of the forgetting of the question of Being in Western metaphysics. The history of technology for Stiegler could be described as the history of grammatization, a term coined by the French historian and linguist Sylvain Auroux, in which the organic and inorganic organs are configured and reconfigured according to the progress of technological invention (e.g. alphabetic writing, analog writing, digital writing).

Stiegler, who became a philosopher when he was incarcerated in Toulouse for committing several armed bank robberies, is currently director of the Institute of Research and Innovation (IRI), an institute that he established in 2006 in the Centre Georges Pompidou in Paris, and president of the lobby group ars industrialis. Best-known for his magnum opus Technics and Time, he has more recently dedicated himself to research on digital technologies as our new technical condition, and he has developed what he calls a “general organology” (more on this below) to understand the effects on that condition of today’s consumerist capitalism (Stiegler 2010a). He has been a member of the national council for the digital in France. Stiegler’s politics consists in what he (following Plato and Derrida) calls the pharmacology of technology, namely the fact that technology is at the same time good and bad, remedy and poison. The politics of technology is to inhibit the toxicity in favor of the remedy. This also reveals his hope for the positive use of the pharmakon as resistance against industrialization based on the exploitation of psycho-power, neuro-plasticity and the capacity to take care of oneself and of others (Stiegler 2010b).

Of course, the immediate decarbonization of our economies and a transition to renewable energy sources should be our first imperative. It could also be the case, as some geologists suggest, that geo-engineering will solve some of the problems that those changes would also address (Steffen et al. 2011). Others propose so-called “third way technologies” for carbon capture to reduce the atmospheric burden of CO2 during the time that is needed for the transition to a carbon-free economy (Flannery 2016). However, what we are now facing is much more than a geo-chemical problem; indeed, it would be naïve to believe that it is only a geological question. We are facing, rather, what Stiegler calls the “entropocene”: the becoming entropic (in the sense of a world-wide exhaustion and ruination) of the biosphere due to what he calls a generalized toxification of all the systems that make up the human habitat on this planet: economic, social, technical, psychological, financial, juridical, educational, etc. (Stiegler 2017). In his view, those systems are all conditioned by a technical milieu which has been massively annexed and exploited by the capitalist industry to promote an ever more nihilistic process of production and consumption that exclusively serves the goal of profit accumulation. Since the technical milieu also encompasses the Earth’s biosphere, this leads to a massive accumulation of entropy that has reached such a scale as to profoundly disrupt the geochemical processes of the earth.

For Stiegler, humanity is an originally technical phenomenon that is made up of three different organ systems: the psychosomatic organs of human individuals; social organizations; and all kinds of technical organs (Stiegler 2014). Those three organ systems are intimately intertwined and evolve on the basis of changes in the technical organs. And these technical organs must be understood as compensations for an original lack of natural properties. Stiegler has developed the latter point with reference to the story that the sophist told in Plato’s Protagoras, in which the fire stolen by Prometheus is a compensation for the fault of Epimetheus, who forgot to give the human being any skill or property. Stiegler is critical of this compensation, or what he also calls the supplement. By taking up the concept of the pharmakon from Plato’s Phaedrus and Jacques Derrida’s “Plato’s Pharmacy” (Derrida 1981), he developed further what he calls a “pharmacology” of technology (Stiegler 2011). Technics are understood as pharmaka, i.e., both medicine and poison. New technologies, and one can think of the internet as a digital pharmakon, are initially always toxic, and that is why they are in need of “therapies” which can turn the poison into a remedy. Politics, law, education, skill-based labor and professions are for Stiegler domains where such therapies can be developed (Stiegler 2013). Since technological innovation has been delegated totally to the market by neoliberalism and turned into a permanently accelerating process of “innovation for profit”, this therapeutic adoption of technology has become almost impossible, leaving only constant, frenetic, and increasingly blind adaptation. And this is for Stiegler the principal process behind the aggravation of the Anthropocene as entropocene.

An example that may allow readers to imagine how such entropy is produced is the use of technical organs (e.g. social networks, smartphones, automation, drones, etc.) for marketing and consumerism, which consequently destroy the psycho-somatic organs, since they produce only a drive toward perpetual consumption and no longer cultivate desire and therapeutic investment in skills and objects: one can think of the addiction to video games or the internet and how they lead to the collapse of established social organizations. That situation systematically diverts attention away from confronting our real situation on this planet. The restructuring of the economy as exo-somatisation, oriented around the digital attention economy, big data and what is called “algorithmic governance”, is taking us ever further into the abyss of nihilism. And yet, the internet is potentially also the best instrument at hand for a collective care-taking of the Earth and its inhabitants on a global scale. In Stiegler’s For a New Critique of Political Economy (Stiegler 2010), one of the alternatives put forward is an “economy of contribution”, which proposes to develop technologies that serve the initiation of a new economy of real investment of desire and fight against the drive-based economy of consumerism. If the drive-based economy ultimately leads to addiction, then the economy of contribution hopes to turn libido into investment. That conversion is fundamentally a question of care: taking care of oneself and others.

    The entropocene marks the inability to construct such an economy of care and of libido. Instead, it leads and will continue to lead to the further spread of entropy. The anthropocene presents a global symptom, which cannot and must not be ignored as if it were simply a geological or a merely economic question. In 2015, the summer school of the Pharmakon academy (the philosophy school Stiegler started in 2010 in Epineuil, France) was dedicated to the "affirmation of a neganthropocene". The neganthropocene calls for a new form of technological development that allows a so-called "bifurcation", a radical change of direction in the thermodynamic sense, and seeks to produce qualitative differences for individuals as well as social groups. Recently, Stiegler has started a project with the Plaine Commune of Saint-Denis, next to Paris, to create what he calls a "truly smart city", a realization of his philosophy of a new economy.

    *

    Sloterdijk already provided a perceptive and prescient sketch of the global situation of humanity in the epoch of what is now called the Anthropocene in his 1989 treatise Eurotaoismus. Until the dawning of the planetary "limits to growth", as the famous 1972 report on the discrepancy between global economic expansion and planetary resources issued by the Club of Rome was entitled, the Earth was conceived (and accordingly treated) by a modernizing and industrializing humanity exclusively as the backdrop and unlimited resource fund for its cultural-historical projects. The metaphysical and "antisymbiotic" logic that characterizes the historical drama of mobilization that is modernity is indifferent if not blind to the stage upon which it is enacted. For a humanity that aims to become "master and possessor" of nature, as Descartes' famous phrase had it, the Earth is reduced to a servant and supplier of material and energetic resources (and it is today still overwhelmingly considered as such by politicians and economists in terms of the "ecosystem services" it provides). It is only when the play starts to ruin the stage, Sloterdijk wrote in Eurotaoismus, that the actors are forced to take another view both of the stage and of themselves. What was once called "nature" and conceived of as an ever-reliable, productive, abundant and robust backdrop has been fatally implicated in the maelstrom of human productivism and consumerism – "enframed" by it, as Heidegger would have it – with the destruction of its habitability impending if humanity does not start taking care of it and making it an integral if not central part of its cultural concerns. Referring to a phrase of the late Heidegger, Sloterdijk writes that the Earth can for us no longer be the endlessly patient "building-carrying" one that she was for all of humanity before us. The continued existence of so-called "nature", which we have now uncovered as being just a small and fragile "film" covering a planetary body, "can no longer be entrusted to her own autarky (since she has been scientifically exposed and technologically exploited), but will become dependent on us humans" (Sloterdijk 1989). That realization also means the definitive end of any peace of mind in the cosmos, on which all human cultures until now have rested (Davis and Turpin 2015).

    In the apocalyptic last chapter of his 2009 book You Must Change Your Life, Sloterdijk claims that the awareness of the fact that we cannot continue our current care-less lifestyles any longer but need to “change our lives” and start “taking care of the whole” is nowadays almost universally shared, forming the quintessence even of today’s Zeitgeist. Arguing that the global crisis shares many characteristics with the ancient God of monotheism, he speculates that this crisis will inevitably initiate, and will have to initiate, nothing less than a global immunological turn, i.e., a revolutionary transformation in the way humans construct and organize their immuno-spheric residence on the planet: “a new world-forming gesture” in terms of a new global project of sphere-construction, understood first of all as a transformation from local to global immunization strategies, from local protectionisms to a “protectionism of the whole” (Sloterdijk 2014a). This will require a “social tipping point” in the awareness, willingness and ability to act collectively as Earthlings.

    A viable future for humanity on this planet can therefore only be conceived for Sloterdijk on the basis of constructing a "global co-immunity structure" or a "global immune-design", infused by a spirit of "co-immunism", based on the awareness of a shared ecological and immunological situation and the realization that this new situation, which is actually that of the Anthropocene, cannot be dealt with on the basis of the existing local techno-cultural resources alone but needs a planet-wide "logic of cooperation" (Sloterdijk 2014a). The technological conversion suggested by Sloterdijk is what he calls a homeotechnological turn, i.e., a turn from the traditional, largely contra-natural, dominating, Earth-ignoring and Earth-ignorant allotechnological paradigm to a co-natural, non-dominating and Earth-caring homeotechnological paradigm. That also means the reconstruction of the global technosphere from a machine of exploitation and violation of the planetary oikos into an engine that co-operates and co-produces with the Earth's bio- and atmosphere, an idea that resonates strongly with Stiegler's negentropic turn (Sloterdijk 2015). Like Stiegler, who sometimes tends to identify the anthropocene with Heidegger's Gestell, re-interpreted as the Ereignis of the Industrial Revolution as the deployment of the thermodynamic machine (the entropic character of which was not perceived by Heidegger, any more than he took account of the notion of entropy in his thinking of physis), Sloterdijk thinks of the homeotechnological revolution as a benign turn of the Gestell towards a global-ecological "housing" project (Gehäuse) (Sloterdijk 2001).

    In a lecture given at the climate conference in Copenhagen in December 2009, Sloterdijk suggests that a homeotechnological conversion of the human noosphere and technosphere around the Earth, and thus the institution of a co-operative and co-productive relation of both anthropospheres with the biosphere, might eventually lead to the explication or unconcealing – here meant in the quasi-Heideggerian sense of the term – of a "hybrid Earth" that is capable of much more than we can now imagine from our still allotechnologically programmed perspective, i.e., a homeotechnologized Earth whose capacities might very well be multiplied to an unimaginable extent (Sloterdijk 2015).

    Applying to the body of the Earth Spinoza's famous dictum (from his Ethica) that "nobody knows what a body can do", Sloterdijk makes the wager that a homeotechnological turn of our immuno-spheropoietic being-on-the-planet forms our best and most hopeful answer to the challenge of the anthropocene. Here he refers to the bold ideas of the famous American architect Richard Buckminster Fuller, whose notion of Spaceship Earth, as expounded in his 1968 book Operating Manual for Spaceship Earth, has had a decisive influence on Sloterdijk's sphero-immunological perception of the global ecological crisis and the anthropocene (Sloterdijk 2015, 108-9).

    As Sloterdijk already emphasizes in the final section of his 1993 book Weltfremdheit, such a global co-immunization project could very well prove to be a challenge that is too big for the anthropos, that is to say, for the anthropos as it currently exists (Sloterdijk 1993). Yet if there is one overarching insight that runs through all of Sloterdijk's onto-anthropological reflections, it is that humans are those beings that are always confronted with problems far too big for them, but which they nevertheless cannot avoid dealing with. This structural burdening with what the tragic Greeks called ta megala, the "big things", which puts human beings under permanent "growth stress" and/or "format stress" – today unfolding as "planetarization stress" (Sloterdijk 1995) – is what anthropogenesis as hominization and coming-into-the-world through sphero-poietic expansion is all about. And philosophy's inaugural task is to be the birth-helper of this process of uncanny coming-into-the-world (Sloterdijk 1993).

    If the human matures by increasing his awareness and responsibility through confrontations with the “big things”, the anthropocenic challenge of creating a global, i.e., planetary co-immunity structure will probably make clear for the very first time, and to all those involved, what “growing up” in its most general sense truly means for humanity (Sloterdijk 1993). Although the anthropos charged with responsibility is still “below the age of maturity” today (Sloterdijk 2015), the challenge of the anthropocene forces him, and provides him with the chance, to assume and acquire the proper maturity.

    Although he never gets very specific about the details, Sloterdijk claims that the anthropocene in this sense requires an entirely new, still to be invented mode of "big politics", one that he designated as "hyperpolitics" in a book entitled Im selben Boot. Versuch über die Hyperpolitik (In the Same Boat. An Essay on Hyperpolitics) from 1995 that is, like many other books from that period, a preliminary sketch for the Spheres project (Sloterdijk 1995). After the "paleopolitics" as the "miracle of the repetition of humans by humans" characteristic of pre-sedentary, pre-agricultural societies, and the "classic politics" of agriculture-based cities and nation-states as the perpetuation of that miracle in larger formats, today's expansion of humans' spheropoiesis toward the global, forcing them to live together in ever larger formats, calls for a hyperpolitics, i.e., a global "state-athletics" for which there are no traditional examples at hand and for which the existing modes of "national-egoism" politics in fact only act as blockades. As in 1995, we can still observe a huge disproportion between the forces that are necessary and the weaknesses that are available, and it still seems all too obvious that "creating jobs on the Titanic" represents the pinnacle of current political intelligence (although piling up debts to continue unbridled consumption is today's preferred policy). And Sloterdijk's spot-on remark after the failed Copenhagen climate summit of 2009, that citizens all over the globe should safeguard themselves from their own governments, still seems all too valid after the 2015 Paris summit.

    The Herculean, currently impossible task for a coming hyperpolitics is to transform today's "monster-international of end-users", or the hypermass of "last men with no return", into a global solidarity collective that again takes care of itself and of the world, and that understands itself as a link between its ancestors and its offspring rather than egoistically as the exclusive end-user of itself and its own life chances, an important theme Sloterdijk extensively elaborated upon in his 2014 book Die schrecklichen Kinder der Neuzeit. Über das anti-genealogische Experiment der Moderne (The Terrible Children of Modernity. On the Anti-Genealogical Experiment of the Modern Age; Sloterdijk 2014b). As such, hyperpolitics is the first politics of last men and should be understood as the continuation of paleopolitics with other means and on a global scale.

    Since human spheropoiesis has gone global and aims to encompass the entire biosphere, the situation of humanity vis-à-vis the planet has reversed, as the Swedish earth system scientist Johan Rockström proclaims, from a "small world, big planet" situation into a "big world, small planet" one (Rockström and Klum 2015). To preserve what he calls a "safe operating space for humanity" within the planetary boundaries, he argues that we are in need of a global governance of the earth system in order to reconnect human techno-cultural systems with the biosphere in a co-constructive fashion. There already exists a "Global Earth Observation System of Systems" (GEOSS), which tracks many key planetary boundary processes. Intelligent and democratic use of such a system might indeed usher in a "good anthropocene" beneficial to all inhabitants of the earth system. It could be one of the supports of the global immune system that, as Sloterdijk claims, is necessary for our collective survival. Yet it is also important to make sure that life in the anthropocene is not just about sur-vival. It should also be a "good life", a "life worth living" in Stiegler's expression.

    But how can Sloterdijk's polyspherology, which takes its visual imagery from bubbles, be prevented from becoming the soil for fascism? The current refugee problem seems to be the touchstone of the foam theory. In an interview with the German magazine Cicero early this year, Sloterdijk claims that "we haven't learned the praise of borders", and that "The Europeans will sooner or later develop an efficient common border policy. In the long run the territorial imperative prevails. Finally, there is no moral obligation to self-destruction" (Sloterdijk 2016b). For sure, borders define the interiority and exteriority of bubbles, and hence realize such a polycosmology; however, they thereby also blur the line between fascism and co-existence. In what sense can we further interpret the concept of co-existence, which has recently appeared in many other works dealing with the anthropocene and the ecological crisis? Co-existence implies first of all communication and coalition – a positive concept of the immune system under the current pharmacological condition, which stands as the opposite of Brexit. We will come back to the politics of co-existence later when we address the concept of the "internation" as an alternative political imaginary.

    Dealing With the Apocalypse. A New Kind of Politics for the Anthropocene

    Let us try to conclude by restating the classic question: "what is to be done?" Recently there has been a lot of discussion about the question of scale, and the Anthropocene is a scale problem of the highest order. The well-known writer Evgeny Morozov has stated in almost all of his recent speeches that there is in fact NO alternative to the current neoliberal model of Silicon Valley – you are "free to use and free to give your data" – because the "Silicon Valley ideology" is so powerful that no individual effort will ever be able to challenge it; only the intervention of a body like the European Union could have a substantial effect. However, he does not see this happening. On the other hand, British accelerationists like Nick Srnicek and Alex Williams have argued that after Occupy Wall Street, the resistances or "micropolitics" that continue to spring up everywhere (such as urban gardening or dumpster diving) are not able to "scale up" to really challenge capitalism (Srnicek and Williams 2013). They criticize the individualist morality of the anarchists as a self-limitation of revolutionary force, which therefore falls prey to appropriation by capitalism (Srnicek and Williams 2016: 29-37). This leaves us in a situation of helplessness, in which micropolitics becomes self-consolation par excellence. The authors propose what they call an accelerationist politics inspired by the Cybersyn project in the socialist Chile of the early Seventies, namely a socialist appropriation of technology in order to construct what they call a "post-work" economy, which includes 1) full automation, 2) the reduction of the working week, 3) a universal basic income and 4) the diminishment of the work ethic (2016: 127). Except for the last point, which is very close to the anarchists, their vision can be superimposed on the agenda of the Chinese Communist Party, which is unfortunately built upon a rather simple if not naïve understanding of technology.

    First of all, it remains to be debated whether previous forms of resistance are futile, especially when such claims are no more than pure intellectual exercises. Indeed, such claims seem like a revival of cynicism, allowing intellectuals to stay in front of the computer and renounce direct action on the street; and sometimes it seems even grotesque when some respond to such an "impasse" by "fully appropriating" Facebook or Google, as if "high technology" necessarily led to the illusory "post-capitalism" in the sense of Paul Mason (Mason 2016). A more critical attitude towards technological acceleration should be taken, one which goes beyond the opposition between optimism and pessimism. Both proposals, for the neganthropocene and for co-immunization, should be taken further as concrete political acts. Their realization can only be achieved by going back to the question of the local. Locality is central for both Stiegler and Sloterdijk in terms of resistance against global capitalism, and locality can only be achieved through personal contacts and concrete projects, which seem ever further removed from the grand intellectual revolutionary plots. We don't pretend to know what is to be done. However, for effectively confronting the Anthropocene, and responding to it in a systematic and scalable way, we would like to propose two points concerning the role of the state and the form of resistance.

    If states want to avoid being liquidated by the neoliberal economy, they will have to assume responsibility. We all know that nation-states had no problem whatsoever with intervening after the financial crisis of 2008, when the European banks ran into trouble. It was a moment when European governments undeniably showed that they are still capable of doing things on a global scale – though in the wrong way – in stark contrast to Hardt and Negri's thesis of the power of Empire and the withering-away of nation-states (Hardt and Negri 2000). It seems that the nation-state should be obliged to take the problem of the Anthropocene seriously and act upon it – not just by "going green", but also by seriously addressing what Stiegler diagnoses as the entropic becoming of our world. However, it is also undeniably true that national governments have become pawns in the hands of global oligarchies and that national sovereignty is de facto eliminated and replaced by the dictates of the financial markets, with the recent fate of Greece being the most pitiable example. How much hope can we still place in our governments? Indeed, one should be skeptical about them; at the moment, however, they are the only institutions, besides transnational enterprises, that can effectively mobilize resources for large-scale projects.

    The anti-globalization movement of the late 20th century and the first decade of the new millennium popularized the multitude, yet the silence of the anti-globalization movement in recent years suggests that the forms of micropolitics or artistic gesture it proposed are no longer effective for dealing with the Anthropocene. By the same token, we already know about the failure of the "third sector" of NGOs, which since the anti-globalization movement hasn't cast any new light on the future. We also know that the post-World War II institution of the United Nations, despite its innumerable programs, doesn't have any real executive power. Surely one can imagine, as many have done, that in order to form a federal body more powerful than the United Nations, a third world war would have to break out – and if the Anthropocenic situation worsens, such a scenario is not at all unrealistic.

    By way of conclusion, we want to gesture toward the possibility of establishing an "internation", a concept developed by Marcel Mauss in 1920 and recently taken up by Stiegler to propose the constitution of a new form of public power that might be able to defy the forces of capital and guide humanity into another future than the barbaric and intolerable "no future" prescribed by neoliberalism's TINA ("There is no alternative") mantra (Stiegler 2015b). Mauss delivered the paper "La nation et l'internationalisme" at the colloquium "The Problem of Nationality", organized by the Aristotelian Society in London, in which he expressed the urgency for philosophers to take an avant-garde approach to the question of the nation and the internation (Mauss 1920). The increasing economic interdependence after the First World War becomes a "défaut", on the basis of which Mauss also proposed a "moral interdependence" of mutual aid, as well as a reduction in sovereignty in order to reduce war. Stiegler recently took up Mauss's notion of the internation in States of Shock (2015b), interpreting it through the lens of Simondon's concept of individuation.

    Bernard Stiegler. Courtesy of Alchetron

    A nation, for Stiegler, is a project of "collective individuation" through the establishment of a res publica. The internation is a project that takes this process further in order to re-institutionalize the production and dissemination of knowledge and to re-create the circuits of transindividuation in the sciences, which are now dominated by the marketization and commercialization of knowledge. Stiegler imagines this internation first of all as a project for academics, and scholars more generally, all over the world (what he calls "interscience") to unite in resolutely refusing their recruitment into the global economic world war unleashed by neoliberalism and instead sign a global peace treaty, backed up by a new legislative body (Stiegler 2015b).

    This should initiate the re-forging of the digital networks into tools for cooperation and care, and for the elevation of collective intelligence. De facto, this internation already exists (and has existed for a long time) in the form of collaborations among research institutes, schools and universities worldwide. However, the research funding strategies of the past decades in Europe (if not worldwide) have rigidified these collaborations and turned them into zombie-like dogmas. The political visions of researchers are always subordinated to the hidden agenda of the market and commercial value (what is called the "valorization agenda"). There is no lack of awareness of this among academics, but at the moment there is no effective strategy to act against the market hegemony. The formation of an internation could foster such a strategy. Yet it will have to become explicitly politicized in order to function as a catalyst for the construction of new forms of global socialization and cooperation that could usher in the neganthropocene and bring about a large-scale homeotechnological revolution in the sense of Sloterdijk. The only alternative would be to surrender to the brutal dictates of a consumerist capitalist innovation that will only produce more entropy, impotence and stupidity. In the words of Stiegler, we need to mobilize the internation against disindividuation.

    The creation of an internation has a meaning for our epoch, and indeed there is an urgency to it, in view of the destructive nature of the anthropocene and the entropic becoming of the technological world. It is certainly not the responsibility of intellectuals and universities alone, and a larger-scale association with sectors and groups outside the university will certainly be necessary; but it is also important to reflect on these questions at the level of locality and localization, according to different orders of magnitude. To pass into act is not only a question of perception and action but also, and probably even more profoundly, a process of psychic and collective individuation, which doesn't come naturally. It takes courage to create such a condition and to make such a quantum leap. Retrospectively, Mauss's remark on intellectual courage can therefore still serve as a Mahnruf to contemporary intellectuals:

    Why didn’t the philosophers take an avant-garde position on this? They understood well that it was a question of founding the doctrine of democracy and of nationalities. The British and the French were ahead of their time, and one shouldn’t forget Kant and Fichte. Why did they choose to stay at the back, and to serve vested interests? (Mauss 1920)

    We would finally like to ask here, most likely in deviation from Stiegler’s own intentions, whether it would be possible to conceive of such an internation as an enabling strategy for what Antonio Negri and Judith Revel have called “the invention of the common” (Negri and Revel 2008), i.e. as an intermediate step toward the establishment of a “global commons” of knowledge and capabilities and ultimately a common global authority not only beyond the private but also beyond the public. This return to Negri does not mean that we propose to undermine the role of the state, which we invoked earlier. On the contrary: if the global economy of the past decades has been running on the principle of privatization and marketization, as Slavoj Žižek has rightly argued (Žižek 2009), and if the recent triumph of Donald Trump as well as the Brexit vote signal a return to a conservative revolution founded on the strengthening of sovereignty and border control, then “communization” will be a counter-process against capitalism’s struggle for self-preservation. In that case, the economy of the commons inscribed in the project of the internation could become a vehicle for the creation of a truly global co-immunity structure, and a truly global engine of neganthropy. But for this to be possible, there must first be a re-orientation of strategies in teaching, research and funding within universities.

    References

    Barnosky, Anthony D. et al. 2012. “Approaching a state shift in Earth’s biosphere”, Nature, No. 486, 07 June 2012: 52–58

    Bonneuil, Christophe and Fressoz, Jean-Baptiste. 2016. The Shock of the Anthropocene. The Earth, History and Us. London: Verso.

    Crutzen, Paul. 2002. “Geology of Mankind”, Nature, No. 415, 3 January 2002: 23.

    Davis, Heather and Turpin, Etienne. 2015. Art in the Anthropocene: Encounters Among Aesthetics, Politics, Environments and Epistemologies. London: Open Humanities Press.

    Derrida, Jacques. 1981. “Plato’s Pharmacy” in Dissemination. Translated by Barbara Johnson. Chicago: University of Chicago Press.

    Flannery, Tim. 2016. Atmosphere of Hope. Solutions to the Climate Crisis. London: Penguin.

    Hardt, Michael and Negri, Antonio. 2000. Empire. Cambridge: Harvard University Press.

    L’impansable. 2006. L’Effondrement du temps. Tome 1, Pénétration. Paris: Le Grand Souffle Editions.

    Jonas, Hans. 1985. The Imperative of Responsibility. In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.

    Kolbert, Elizabeth. 2014. The Sixth Extinction: An Unnatural History. London: Bloomsbury.

    Latour, B. 2008. “A Cautious Prometheus? A Few Steps Toward a Philosophy of Design (with Special Attention to Peter Sloterdijk)”. In Fiona Hackney, Jonathan Glynne and Viv Minton (editors), Proceedings of the 2008 Annual International Conference of the Design History Society, Falmouth, 3-6 September 2008. E-books, Universal Publishers, 2-10.

    Mason, Paul. 2016. PostCapitalism: A Guide to Our Future. London: Penguin.

    Mauss, Marcel. 1920. « La nation et l’internationalisme. » Paper delivered in French at the colloquium “The Problem of Nationality”. Proceedings of the Aristotelian Society, London, 20, 1920: 242-251.

    McPherson, G. 2013. Going Dark. Baltimore: Publish America.

    Moore, Jason. 2015. Capitalism in the Web of Life: Ecology and the Accumulation of Capital. London: Verso.

    Moore, Jason. 2016. Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism. Oakland: PM Press.

    Negri, Antonio and Revel, Judith. 2008. “Inventing the Common”. Multitudes, 13 May 2008: http://www.generation-online.org/p/fp_revel5.htm

    Rockström, Johan and Klum, Mattias. 2015. Big World, Small Planet. Abundance Within Planetary Boundaries. Stockholm: Bokförlaget Max Ström.

    Sloterdijk, Peter. 1988. Critique of Cynical Reason. Minneapolis: University of Minnesota Press.

    Sloterdijk, Peter. 1989. Eurotaoismus. Eine Kritik der politischen Kinetik. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 1993. Weltfremdheit. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 1995. Im selben Boot. Versuch über die Hyperpolitik. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2001. Nicht gerettet. Versuche nach Heidegger. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2004. Sphären. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2009. “Das 21. Jahrhundert beginnt mit dem Debakel vom 19. Dezember 2009”, Süddeutsche Zeitung, 19 December 2009.

    Sloterdijk, Peter. 2014a. You Must Change Your Life. Cambridge-Malden: Polity.

    Sloterdijk, Peter. 2014b. Die schrecklichen Kinder der Neuzeit. Über das anti-genealogische Experiment der Moderne. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2015. Was geschah im 20. Jahrhundert? Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2016a. Foams: Spheres Volume III: Plural Spherology. Los Angeles: Semiotext(e).

    Sloterdijk, Peter. 2016b. “Es gibt keine moralische Pflicht zur Selbstzerstörung”. Cicero Magazin für politische Kultur, 28 January 2016: http://www.cicero.de/berliner-republik/peter-sloterdijk-ueber-merkel-und-die-fluechtlingskrise-es-gibt-keine-moralische

    Sloterdijk, Peter and Stiegler, Bernard. 2016. “Welcome to the Anthropocene. A Public Debate”, Nijmegen, 27 June 2016: https://www.youtube.com/watch?v=HoxPk4VBbOk

    Srnicek, Nick and Alex Williams, 2013. “#ACCELERATE MANIFESTO for an Accelerationist Politics” www.criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/

    Srnicek, Nick and Alex Williams, 2016. Inventing the Future. London: Verso.

    Steffen, Will et al. 2011. “The Anthropocene: From Global Change to Planetary Stewardship”, AMBIO Vol. 40: 739-761.

    Stiegler, Bernard. 1998. Technics and Time Vol. 1. The Fault of Epimetheus. Stanford: Stanford University Press.

    Stiegler, Bernard. 2010a. Taking Care of Youth and the Generations. Stanford: Stanford University Press.

    Stiegler, Bernard. 2010b. For a New Critique of Political Economy. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2013. What Makes Life Worth Living. On Pharmacology. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2014. Symbolic Misery Vol. 1. The Hyperindustrial Epoch. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2015a. « Ce n’est qu’en projetant un véritable avenir qu’on pourra combattre Daech ». Interview with Le Monde: http://www.lemonde.fr/emploi/article/2015/11/19/bernard-stiegler-ce-n-est-qu-en-projetant-un-veritable-avenir-qu-on-pourra-combattre-daech_4813660_1698637.html

    Stiegler, Bernard. 2015b. States of Shock: Stupidity and Knowledge in the 21st Century. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2016. Dans la disruption. Comment ne pas devenir fou? Paris: Les Liens qui libèrent.

    Stiegler, Bernard. 2017. Automatic Society: Volume 1: The Future of Work. Cambridge-Malden: Polity.

    Vial, Stéphane. 2016. La Fin d’un Philosophe. https://medium.com/@svial/bernard-stiegler-la-fin-dun-philosophe-autrefois-inspirant-ff59c1ac4c8#.d7sqsaa6s

    Ward, Peter. 2015. The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton: Princeton University Press.

    Wark, McKenzie. 2015a. “The Capitalocene”. PS: http://www.publicseminar.org/2015/10/the-capitalocene/

    Wark, McKenzie. 2015b. Molecular Red. Theory for the Anthropocene. London: Verso.

    Westbroek, Peter. 1992. Life as a Geological Force: Dynamics of the Earth. New York: W.W. Norton & Co.

    Wyke, Tom. 2016. “Killer heatwave alert after temperature hits a record 51C: More deaths feared as hundreds die and India sees its hottest day on record”, Daily Mail, 20 May 2016.

    Žižek, Slavoj. 2009. First As Tragedy, Then As Farce. London: Verso.

    [1] See Barnosky et al. 2012.

    [2] The event took place in Paris just before the COP 21 in November 2015 and was organized by Philippe Descola and Catherine Larrère.

  • Daniel Greene – Digital Dark Matters

    Daniel Greene – Digital Dark Matters

    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

    Start-ups, passports, no-sail lists, databases: These may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but as a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and how technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this “fact” is often invisible even in our studies of surveillance and digital media more broadly speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development this whole time. Though this dark matter is difficult to visualize, Browne shows us how to trace it through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

    This then is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies—particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter, who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, when he was overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding their tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding a critical exploration of policing and surveillance in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after he was expelled from Algeria by French colonial authorities. Fanon’s psychiatric imagination, which grants such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of ‘control by quantification’—punch clocks, time sheets, phone taps, and CCTV—in factories and department stores, and the alienation engendered in the surveilled.

    Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its opposition is found in dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism and an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

    Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out Bentham travelled on a ship transporting slaves during the trip on which he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline is thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveals the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate but rather partnered with the brutal spectacle of sovereign power that was intended to educate enslaved people on the limits of their humanity.
This correction to the Foucauldian narrative is sorely necessary in a field, and a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

    Stowage of the British slave ship Brookes under the regulated slave trade act of 1788
    “Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration. 1788. Library of Congress Rare Book and Special Collections Division Washington, D.C.

    These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest, largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism wherein the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized: calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

    The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation, the means by which the value of black lives is calculated and by which those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance—say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching”—it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

    Here then is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

    This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism—though she does not call it that—shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a ‘pull’ of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of ‘push’ here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner—a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning—caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for the hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South.
An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand
    A Google Books scanner’s hand, caught working on WEB Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres to the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators), perceived to be a niche concern, on par with our digital stories of capitalism—much less digital stories of racial capitalism.

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.

    Back to the essay

  • Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    a review of Dagmawi Woubshet’s The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS (Baltimore: The Johns Hopkins University Press, 2015)

    by Travis Alexander

    ~

    Not long after someone dies in Ethiopia, the edir—friend, relative, or neighbor—takes to the streets to blow a horn and call out the deceased’s name. Thus begins the process of mourning. After this announcement, the edir pitches a tent in front of the bereaved’s home. Over the next three days, mourners congregate in the tent and grieve. By the seventh day, public grieving has largely subsided. By the fortieth and eightieth days, and by the seventh year, still more of its urgency has passed. Dagmawi Woubshet opens The Calendar of Loss with a lyrical description of this practice, according to which the temporality of the living attunes itself to the claim of the dead. It’s a fitting introduction, as The Calendar casts Woubshet himself as no less edir than scholar. His particular charge is the AIDS dead from the “early years” of the epidemic—1981 to 1996, when highly active antiretroviral treatment became widely available. It was in 1996 that AIDS, according to certain political constituencies, was rendered nonlethal; according to others, it was even cured.

    The ambition of The Calendar, though, exceeds mourning the AIDS dead in either the form of a memoir or uncritical memorialization. To be sure, there exists a prolific tradition of just this kind of memoirish text, epitomized by writers like Sarah Schulman. Woubshet looks instead to efforts made by AIDS mourners to simultaneously grieve their dead, process the historical contingency of these deaths, and reckon with the probability that their own deaths were on the horizon. As such, these works are “steeped in a ‘poetics of compounding loss’” (3). This idiosyncratic form of mourning not only registers a novel structure of feeling but, in “confound[ing] and travers[ing] the limits of mourning,” renders extant literary and cultural elegiac genres inadequate (3). Evincing his interdisciplinary sensibility, Woubshet trains his analysis on genres running to obituaries, funerals, graffiti art, photography, film, epistolaries, choreography, installations, and of course, the poetic elegy itself. The resulting critical work is a dialogue at the intersection of trauma studies, psychoanalysis, queer theory, and African Diaspora studies.

    Woubshet organizes the book’s chapters according to the various ways that queer loss was reinserted into a public discourse that had attempted first to conceal it, and then to efface its embodied specificities. To take only one of his most powerful examples, Woubshet addresses how in its traditional form the obituary had functioned as a disciplinary genre of (hetero-) reproductive futurism. In its foregrounding of birth-family kinship networks, the obituary not only omitted mention of gay partners, but reified the futurism (those, especially children, who live on) that sublimates and mediates such reproductivism. Moreover, these pieces never mentioned AIDS, coyly alluding instead to a “long disease” the deceased had suffered, thereby interring the dead in one last closet. In response to the mainstream news outlets running these posthumously disciplinary remembrances, gay newspapers “arrogated to themselves the authority of the obituary,” emphasizing the cause of death and the queer networks left in the wake of the decedent’s passing, thus both constituting queer counterpublics and protecting the “rights of the queer dead from the normative rites of the living” (59, 61, 67, 84). Woubshet’s ability to demonstrate how works of mourning exhumed the queer body interdicted from the scene of public grief is equally salient in his poetic analysis, centering on figures like Melvin Dixon and Paul Monette and informed by poetry and elegy scholars ranging from Peter Sacks to Max Cavitch to Jonathan Culler. He hastens to remind us that the explicitly fatal homophobia of the 1980s and ’90s has simply been sanitized into the gay liberalism of the present. In its triumphalist projection of gay normalcy and citizenship, gay liberalism (akin to what Jasbir Puar calls homonationalism) demands the erasure of AIDS, of the embodied queer past. “[B]y looking for the dead now, therefore,” The Calendar of Loss “challenge[s] gay liberalism’s present undertaking” (23).

    As such, the reformulation of central mourning genres such as the obituary, Woubshet notes, wasn’t demanded simply by the novel epidemiological and biocultural poetics of AIDS itself. It also responded to the unique forms of silence and erasure under which queer loss was placed in the 1980s and ’90s by civil and governmental institutions alike. It is this “regulation of the ‘sphere of appearances’” (to borrow Judith Butler’s phrase) that the activist group ACT UP (AIDS Coalition to Unleash Power) addressed in its motto “Silence = Death” (16). Woubshet argues that the protocols of silence in this era “disprized” mourners of queer loss, “shroud[ing]” their grief “in silence, shame, and disgrace” (4). The texts and performances collected in The Calendar refuse this status, and collectively insist that “mourning = survival.”

    In its recuperation of a form of grief that is indeterminate and inconsolable, The Calendar of Loss is also a referendum on the approach to loss and trauma offered by Freudian psychoanalysis, which sets forth a pat binary between normative grief (mourning) and pathological grief (melancholia). Where the mourner eventually replaces his lost object, the melancholic cannot, and languishes. Amid the exigencies of AIDS, however, this binary falls short insofar as it fails to apprehend the fact that for these mourners, death is not a “singular” event, but part of an ever-expanding series of deaths, including—most likely—the mourner’s own (5). The melancholic grief of queer communities constituted by AIDS is certainly not “normal” according to Freud, but neither is it pathological, inasmuch as it “achieve[s] cathexis in mourning itself and in its art and activism. However, […] as newly cathected objects, [these] cannot displace loss; on the contrary, they place loss center stage” (18). In worrying the normal/pathological binary, Woubshet delivers a theoretical instrument to those employing psychoanalysis, and a bracing intervention to a queer theory whose conceptualizations of trauma have unproblematically embraced this conspicuously unqueer binarism for too long.

    Drawing on work by Howard Thurman, Woubshet observes that this non-pathological melancholy finds clear historical expression in the genre of slave songs and black spirituals. In the spirituals as well as in black life generally, “[d]eath and dying are not just ‘unusual, untoward events’ or ‘inevitably end-of-lifespan events,’ but instead punctuate [it] routinely and proleptically” (19). This constant anticipation of loss is central to the conceptions of social death elaborated by scholars such as Orlando Patterson. Thus, the paradigm of black mourning (as in the slave songs) and black life generally “accommodates” and illuminates early AIDS mourning, particularly in its “insistence that death is ever present, that death is somehow always impending, and that survivors can confront all this death in the face of shame and stigma in eloquent ways that also often imply a fierce political sensibility and a longing for justice” (5). This comparative work confirms The Calendar of Loss as the first monograph in the humanities at the intersection of queer theory and African Diaspora studies and allows it to spark a true theoretical commerce between those fields (26).

    Already in this book, in fact, interdisciplinarity has sensitized Woubshet to a liability of queer theory over and above its internalization of Freud’s pathologization of melancholy. I’m speaking here of queer theory’s characterization of the child, derived heavily from Lee Edelman’s pathbreaking No Future: Queer Theory and the Death Drive (2004). In this latter account, the figure of the child is not only opposed to the queer subject, but is deployed—insofar as it represents the claims of futurity—to discipline and defer queer pleasure, which represents by contrast not only the present at the expense of the future, but also the very foreclosure of the future itself. In his final chapter, Woubshet details the Sudden Flowers collective, which provides the resources for Ethiopian orphans whose parents were lost to AIDS to create works of art and performances that help mediate their grief. Many of these orphans choose to write letters to their deceased parents in which they chronicle the stages and practices of their mourning, and the sensation of the absence, the lost object(s), they have not (yet) filled or replaced. These children “rely not on idealized figures of innocence and purity to characterize their own experiences, but instead on queer figures of abjection, disparagement, and fearlessness,” thereby “thwart[ing] the naturalized figure of the child as the very embodiment of futurity” (140). The experiences of these children, then, are a living rebuke to the cleanliness of queer theory’s characterization of the child. But Woubshet doesn’t simply gesture to the children of Sudden Flowers to append an asterisk to queer theory’s anti-natalism, to correctively bolster its critical acumen (though he certainly does accomplish this).
While joining Edelman in the latter’s critique of hegemonic natalism, he breaks away in aiming to indicate what we might well call the white privilege of queer theory—the complacency of the latter’s archive, its evident disinterest in the particularities of life in the submerged global south in favor of an aestheticized lumping-together of African people with AIDS under the signifier of unalterable tragedy.

    But more witheringly still, The Calendar of Loss reveals the extent to which queer theory becomes a vested defender, an unwitting academic strategist, in the process of universalizing whiteness. Drawing on Robin Bernstein’s Racial Innocence, Woubshet recounts how, unlike the image of the white child that gelled (under the auspices of nineteenth-century Romanticism) to figure innocence, purity, and futurity, the black child, discursively produced simultaneously (most canonically in the pickaninny), evoked repulsion, abjection, and social death (142). “Emptied of innocence and futurity,” he speculates, “the black child […] cannot be a marker against which queerness can be negatively defined” (142). Hidden behind the tact of Woubshet’s account is the indictment that positions like Edelman’s not only prefer the white child for its compatibility with a given theoretical imperative, but perpetuate a universalization according to which the white child, unburdened by racial marking, becomes the child as such, which iterates in turn the social death (in its rhetorical concealment) of the black child. This revelation represents just one of the fruits of Woubshet’s inflection of queer theory by the itinerary of African Diaspora studies.

    While we might fairly critique Woubshet’s failure to address the role of NGOs (like those that care for Ethiopian orphans) as the “mendicant orders” (cf. Hardt and Negri) of the very same biopolitical governmentality that allowed AIDS to become a pandemic in the first place, this oversight seems the exception rather than the rule. The Calendar’s more concerning oversight is instead its unintentional reification of vitalist, optimistic, and citizenship-oriented rubrics of affect in its moments of “recuperation.” Consider for example Woubshet’s description of the children in the Sudden Flowers art collective who become “political figure[s], publicly taking on one of the most urgent issues of our time, [while simultaneously] departing from the norm” (144). These children are revealed in turn as “powerful agents, as subjects capable of reflection on and articulation of their experiences” (140). Here these children become deserving of praise insofar as they embrace an active, vigorous relationship with their circumstances. Elsewhere Woubshet attributes the same valorizing characteristics to the gay American subjects of his book too. AIDS mourners “across the Atlantic […] embodied AIDS openly and fearlessly” (5). Here “openly and fearlessly” carries the same sense of vigor and interactivity he attributed to the “powerful,” “agent[ial]” children of Ethiopia.

    Not only do these forms of affect coincide neatly with the behavioral strictures demanded by a late liberalism that exercises itself in intellectual and emotional economies, but they also threaten to undo the depathologization of melancholy executed above. That is to say, where Woubshet had previously claimed to find melancholy non-pathological insofar as it generates a new cathexis (attention to compounding loss), here he seems to smuggle in—through “articulation of […] experiences”—the kind of object-replacement or work-completion characteristic of normative mourning. Indeed, he says so himself in expressing his desire to show that nonnormative mourning “can be ‘productive rather than pathological, abundant rather than lacking, social rather than solipsistic, militant rather than reactionary’” (22). Here Woubshet no longer desires simply a neutral opposition to the pathological (that is, the nonnormative), but—in the term “productive”—casts his lot with a term derived from the cathectic economy of capital. In turn, “social” evokes liberal citizenship and pluralism, while “militant” continues the valorization of vigorous and positive affect suggested earlier by “powerful,” “agent[ial],” “open,” and “fearless.” Inasmuch as “militancy,” “articulation,” “social[ity],” and “productiv[ity]” address themselves to futurity, they reiterate the natalism that Woubshet, in agreement with Edelman, deemed unsalvageable.

    Indeed, Edelman himself is perhaps most helpful in diagnosing the forms of complicity I’ve attributed to Woubshet. In a 2006 piece, he cautions us against the trap of “affirm[ing] an angry, uncivil ‘politics of negativity’” (“The Antisocial Thesis in Queer Theory” 821). Insofar as such negativity is “affirmed,” it becomes “little more than Oedipal kitsch,” performing the sentimental and “fundamentalist […] attachment to ‘sense, mastery, and meaning,’” and thereby striking “the pose of negativity while evacuating its force” (822). True negativity, meanwhile, refuses what Adorno calls the “all subjugating identity principle” (Negative Dialectics 320). In his attempt to depathologize queer melancholy, Woubshet pays homage to negativity, spurning the identification between melancholy and pathology. But in framing that melancholy as “militant,” “productive,” “social,” “articulate,” “open,” “fearless,” and certainly “agent[ial],” his negativity is outed as an identity principle in drag. This complicity also lends support to Jasbir Puar’s recent critique of affect theory (“Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity”). For her the latter, in attempting to conceptualize a register of energies and forces uncapturable by a form of governmentality dependent on the capitalization of intellectual and emotional labor, unwittingly finds itself attributing to affect a set of optimistic, buoyant characteristics that are themselves of a piece with the imperatives of productivity and ablement central to late capital in the first place (“Prognosis Time”). While Woubshet’s methodology has no stake in affect, the optimism inherent in his characterizations of melancholic grief and its creative expression—even his exclusionary attention to only those who have taken it upon themselves to create—instantiates the ideological double-bind of Puar’s affect theorists.

Of course, a productivity that is cyclical and endlessly iterative would be recuperable where one that is teleological would not. And his investment in the trope of the calendar, which evokes a form of articulation that repeats—despite its “militan[cy]”—in stasis, suggests that this is the version of productivity Woubshet has in mind. So his flirtation with productivity is potentially aesthetic rather than ideological. Whatever the case may be, The Calendar of Loss remains a rich and urgently needed contribution. When the legacy of AIDS is being submerged, not only by the rhetoric of gay liberalism, but by a generation of queer theorists who have turned their attentions elsewhere, efforts like Woubshet’s to “speak again” its history and “reanimate lives that demand remembering” cannot go unnoticed (xi).


    _____

    Travis Alexander is a Mellon Graduate Fellow at The University of North Carolina, Chapel Hill. Though broadly interested in Post-45 literature and visual art, his specific interests cluster around portrayals of the HIV/AIDS epidemic in film, literature, television, and cultural theory between the 1980s and 1990s. Website: http://englishcomplit.unc.edu/people/travis-alexander.

    _____

    Works Cited

    • Adorno, Theodor. Negative Dialectics. Trans. E.B. Ashton. New York: Continuum, 1994.
    • Edelman, Lee with Robert L. Caserio, Judith Halberstam, José Esteban Muñoz, and Tim Dean. “The Antisocial Thesis in Queer Theory.” PMLA 121.3 (2006): 819 – 828.
    • Puar, Jasbir. “Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity.” Women & Performance: A Journal of Feminist Theory 19.2 (2009): 161 – 172.
    • Woubshet, Dagmawi. The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS. Baltimore: The Johns Hopkins University Press, 2015.
  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

“Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

It’s not that hard to identify significant problems with the Hype Cycle, not the least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

“Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it take fewer than nine years? I mean, it would have to if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands that they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need to take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Aamir R. Mufti:  Forget English!  Orientalisms and World Literatures (Harvard University Press, 2016)

    Reviewed by Vassilis Lambropoulos

    This essay was peer-reviewed by the editorial board of b2o: an online journal

    Aamir Mufti’s Forget English! exposes the regulatory operations of presumably borderless world literature.  Second, it questions the cultural control of presumably egalitarian global English.  Next, it traces the Orientalist administration of presumably universal colonial knowledge.  Readers may agree with all this despite the repeated warnings that these three systems remain closely implicated not only in the objects of study but also in epistemological critique.  Mufti’s most radical proposition comes last:  The basis of the modern national and global cultural field is the institution of literature, that is, the disciplinary literary regimen that includes the askeses of composition, the exercises of pleasure, the practices of interpretation, and the technologies of education.  Mufti’s critique of critique itself as an aesthetic ethics ought to be disturbing.  In what follows, I will repurpose his project, reshuffling its case studies, to foreground its ultimate target, literary ideology, namely, the constitutive antinomies of the interpretive freedom, the self-imposed limits and controls of aesthetic understanding.  I will do that by narrating the institutional story of “literature” that underlies his anatomy of world literature.

    Mufti proposes that today, as a popular project of translation, circulation, criticism, and scholarship, “world literature” turns an opaque and unequal process of violent appropriation into a supposedly transparent and equal one of free communication.  Its inviting name occludes “the ways in which contemporary critical thinking unwittingly replicates logics of a longer provenance in the colonial and postcolonial eras” (248).  This is particularly evident in multicultural celebrations of the Global South.  Mufti warns against “the triumphalist ‘We are the World’ tone so clearly discernible in the self-staging of world literature in our times.  In many ways, the rubric ‘postcolonial literature’ as used in the Global North now serves as a means of domesticating those radical energies – and not just linguistic or cultural differences – [for example, the now defunct “Bandung” internationalism] into the space of (bourgeois) world literature as varieties of local practice – as Indian, African, or Middle Eastern literary practices, for instance” (92).  Instead of liberal appeals to “diversity” and its token-like selections, what is needed is “a concept of world literature (and practices of teaching it) that work to reveal the ways in which diversity itself (national, religious, civilizational, continental) is a colonial and Orientalist problematic, one that emerges precisely on the plane of equivalence that is literature” (250).  Sensitivity to diversity and respect for difference may express noble sentiments but do nothing to question the values dominating the literary and academic market.

    Studies by scholars of world literature often “are salutary in having emphasized inequality as the primary structural principle of world literary space rather than difference, which has been the dominant preoccupation in the discussion of world literature since the late eighteenth century, including in Goethe’s late-in-life elaboration of the idea of Weltliteratur.  But they give us no account whatsoever of the exact nature of these forms of inequality and the sociocultural logics through which they have historically been instituted, logics of the institution of inequality that incorporate notions and practices of ‘difference’ and proceed precisely through them” (33).  Whether they are describing a “world system” or a “republic of letters,” these scholars fail “to understand the mutual imbrication of inequality and difference” (33) in their operations, which is as shortsighted as studying autopoiesis in Niklas Luhmann but not Cornelius Castoriadis.  Mufti does not elaborate a new model of doing world literature.  Instead, he examines how this comprehensive approach to culture has been devised and institutionalized for some two hundred fifty years, starting with the observation that its current resurgence is “a post-1989 development, which has appeared against the background of the larger neoliberal attempt to monopolize all possibilities of the international into the global life of capital.  This mode of appearance of the literatures of the Global South in the literary sphere of the North is thus linked to the disappearance of those varieties of internationalism that had sought in various ways to bypass the circuits of interaction, transmission, and exchange of the emergent global bourgeois order in the postwar and early postcolonial decades in the interest of the decolonizing societies of the South” (91).

    Mufti seeks “to unmask and to make available for criticism and analysis” (20) world literature in the twenty-first century as the main “field force” (199) of the project to subsume all centrifugal possibilities for an international literature under the monopoly of global cultural capital.  He treats it simultaneously as a “concept,” a “field of study,” and a set of “practices and institutional frameworks” (10), and uses a genealogical approach for a “critical-historical examination of a certain constellation of ideas and practices in its accretions and transformations over time” (19-20).  In what follows I discuss the numerous and wonderful case studies much less, in order to focus on the larger historical trajectory produced by this approach.

    The genealogy of world literature begins with the role that “literature as national institution” (3) played “in the emergence of the hierarchies that structure relations between societies in the modern world” (97).  An international literary space first formed in Europe as a structure of rivalries among the traditions (58) emerging in the “intra-European ‘competitive’ vernacularization,” which was later followed by its “colonial absorption and transformation” (76).  The standardization of the vernaculars was a central part of “a project of ethnonational or civilizational nationalism in linguistically diverse and multicultural societies” (148).  This made possible the formation of “literature” as a separate domain of writing and reading out of diverse guild, church, local, and other traditions.  “The nationalization of languages over the past two centuries all over the world . . . transformed former extensive and dispersed cultures of writing . . . into narrowly conceived ethnonational spheres” (146).  Through an extensive philological and interpretive operation “often-overlapping bodies of writing came to acquire, through a process of historicization, distinct personalities as ‘literature’ along national lines” (97).  This is how literature achieves centrality in all constellations of national arts.  “The (now universal) category of literature itself . . . marks this process of assimilation of diverse cultures of writing” (80).  New practices of reading claim existing textual regimes for new purposes and milieus while new elites are also trained to curate them.  “In this process of the acquisition of literary history, the textual corpus acquires, first of all, the attributes of literariness.  That is to say, . . . it enters the world literary system as one among many other literatures, being subject henceforth to the requirements and measures of literariness, replacing the models and modes of evaluation internal to the textual corpus itself.  
Furthermore, in the moment of its historicization, it undergoes a shift of orientation within the larger social formation, being reinscribed within a discursive system for the attribution of a literature to a language, understood as the unique possession and mode of expression of a people” (141).

    A foundational act of historicization produced for the first time the terms of a distinct and independent literary history, anchoring a regional tradition in a national logic (143).  When a premodern corpus of undifferentiated writing acquired such a prestigious history, its newly self-regulating “works” entered literary modernity (38-9).   The admission of a corpus “into world literary space as a distinct literary tradition has characteristically taken place since the nineteenth century through its acquisition of a narrative of (‘national’) historical development” (131).  A literary history proper legitimized the literary modernity of a writing tradition by granting it national authority.

    Thus the word “literature” in the term world literature “marks the plane of equivalence and compatibility between historically distinct and particular practices of writing” (240).  The word “world” in “world literature” is a world of nations, the new regimes of sovereignty.  “’World’ and ‘nation’ are in a determinate relationship of mutual reinforcement here, rather than simply one of contradiction or negation” (77).  When world literature is invoked, it is important to keep in mind “the forms of nationalization of language, literature, and culture installed . . . precisely in and through the world-historical process that is the emergence of world literature” (130).  Literature and nation are mutually authenticating and reinforcing:  They confirm the antiquity and autonomy of one another. “The concept and practices of world literature, far from representing the superseding of national forms of identification of language, literature, and culture, emerged for the first time precisely along the forms of . . . nation-thinking” (97).  In addition, world literature played an important role in the orientation of national literatures toward the global space to which every nation could make its own “distinct national contribution” (112).  This role ought to be placed in an even broader global context since it is important to stress that “the emergence and modes of functioning of world literature, as the space of interaction between and articulation of the ‘national’ or regional literatures, are elements of the much-wider historical process of the emergence of the modern, bourgeois state and its dissemination worldwide, under colonial and semicolonial conditions, as the normative state-form of the modern era” (98).  Literature strengthened the claim of the national state against other state forms by giving voice to its organic character.

    It is in this broader context that Mufti introduces world literature as “the (bourgeois) understanding and experience of the world as an assemblage of ‘literary’ or expressive traditions, whose very ground of possibility was the Orientalist knowledge revolution” (90).  Tracing “the historical dialectic of Orientalism and/as world literature” (38) within literary studies since the late 18th century (99), he highlights the production of entirely new objects of study and insists on the central role “that philological Orientalism played in producing and establishing a method and a system for classifying and evaluating diverse forms of textuality, now all processed and codified uniformly as literature” (80).  If national literature was from the beginning world literature too, this was based on Orientalist assumptions.  Mufti’s strong thesis is that “a genealogy of world literature . . . leads to the classical phase of modern Orientalism in the late eighteenth and early nineteenth centuries, an enormous assemblage of projects and practices that was the ground for the emergence of the concept of world literature as for the literary and scholarly practices it originally referenced” (19).  The project of philological Orientalism, from the microscopic level of the text to the macroscopic one of the library, produces an entire hermeneutics, which “may be understood as a set of processes for the reorganization of language, literature, and culture on a planetary scale that effected the assimilation of heterogeneous and dispersed bodies of writing onto the plane of equivalence and evaluability that is (world) literature, fundamentally transforming in the process their internal distribution and coherence, their modes of authorization, and their relationship to the larger social order and social imaginaries in their place of origin” (145).  
In a nutshell, this is how the colonial Orient was collected, archived, studied, and administered, and the regimes of the truth of the empire established and imposed.

    Orientalism should be understood not only as the apparatus that produced the Orient as a domain of interpretation and administration but additionally as “the cultural system that for the first time articulated a concept of the world as an assemblage of ‘nations’ with distinct expressive traditions, above all ‘literary’ ones.  Orientalism thus played a crucial role in the emergence of the cultural logics of the modern bourgeois world, an element of European self-making, first of all” (35).  In this respect, as in others, the author acknowledges his predecessor, Edward Said, whose  “entire effort in Orientalism was (at one level) to argue for the centrality of Orientalism, as cultural logic and enterprise, to the emergence of modern European culture, to Europe’s self-making” (75).  Mufti illustrates his argument with a fascinating example, proposing that the “lyricization of poetry in the West,” that is, the “gradual expansion of . . . ‘lyric’ norms of expression . . .  to encompass” all practices of reading and writing poetry, is “an intercultural and worldwide process” that can be traced back to the “Orientalist ‘discovery’ of the ‘ancient’ poetic traditions of the ‘Eastern nations’” (71).   By considering the Orient/Occident interplay, a genealogy of the early concepts and practices of world literature shows how a “’lyric’ sensibility emerged in Europe at the threshold of modernity in the encounter with ‘Oriental’ verse and, having taken over the universe of poetic expression in the West, became a benchmark and a test for ‘Oriental’ writing traditions themselves, erasing in the process all memory of its intercultural origins” (74).

    Together, philological Orientalism and (adopting a contrast of Erich Auerbach’s, Herder’s “Nordic” national rather than Vico’s “Latinate civilizatory”) philosophical historicism made the new concept of world literature possible.  The combined Orientalist and historicist thinking legitimized both the different manners of being human and “the same manner of being different” (77).  In addition to its contribution to European self-making, Orientalism contributed to world making as well and deserves to be studied “as an articulated and effective imperial system of cultural mapping, which produced for the first time a conception of the world as an assemblage of civilizational entities, each in possession of its own textual and/or expressive traditions” (20).  Oriental mapping structured “the cultural logic of the modern, bourgeois West in its outward orientation” (11) and facilitated the expansionist “transformation of societies on a world scale” (90).  In non-Western societies it fabricated “forms of cultural authority tied to the claim to authenticity of (religious, cultural, and national) ‘tradition’” (27).

    Orientalism was first activated in the production, periodization, and territorialization of India.  “What the early generation of Orientalists encountered on the subcontinent was not one single culture of writing but rather a loose articulation of different, often overlapping but also mutually exclusive, systems based variously on Persian, Sanskrit, and a large number of the vernacular registers, often more than one in a single language, properly speaking” (104-5).  To make sense of this variety and complexity, they re-structured it completely on the basis of the only model they knew and trusted, the historicist narrative of an evolutionary national history.  “The German and eventually pan-European discourse of world literature is thus fundamentally indebted to and predicated on” (104) the British colonial project of Indological philology, launched near the end of the 18th century.  “It is in this manner, by providing the materials and the practices of a new cosmopolitan (as well as indigenist or particularist) conception of the world as linguistic and cultural assemblage, that English began to supplant the neoclassical order on the continent in which above all others French and France had provided the norms for literary production” (109).  Non-Western textual traditions entered the literary space as “literature” through the revolution of the philological knowledge that included the “discovery” of classical languages in the East and the invention of their family tree (58).  Eastern writing practices were absorbed into “literature” when their ancient works were classicized, that is, established as the original tradition of a civilization and arranged as its core national canon.

    Mufti documents “that Orientalist theories of cultural difference are grounded in a notion of indigeneity as the condition of culture – a chronotope, properly speaking, of deep habitation in time – and that therefore nationalism is a fundamentally Orientalist cultural impulse” (37).  What he calls the “chronotope of the indigenous” (74) consists of “spatiotemporal figures of habitation” (74) deeply rooted in both place/territory and time/history (129).  Its territorially common ground validates “the authenticity of tradition” (112).  Consequently, the task of genealogical inquiry is “to give a historical account of the acquisition of literary history . . . by a vast, diffuse, and internally differentiated body of writing … a historical (and critical) account of the . . . ascription of historicality . . . structured around the chronotope of the indigenous” (143).  The Orientalist practice of indigenization standardized the pluralist logic of a pre-modern cultural space into a differentiated linguistic-literary field and ushered it into the colonial “world republic of letters.”

    The “dual process of indigenization” (116) of language, literature, and culture, which incorporates the intertwined strategies of historicism and Orientalism, consisted in classicizing (say, into Sanskrit) a civilization (say, the Indo-Persian one) and vernacularizing (say, into Urdu and Hindi) its cosmopolitanism (say, the subcontinental one).  Thus, through indigenization, Indian writing essentialized itself into a national literature in order to be admitted to the Orientalist canon of world literature and join the global system of different and unique cultures.  The overlapping colonial cultural projects of indigenization “in the name of return to the origin” (173) and vernacularization as recovery of “authenticity” (251) are inseparable from bourgeois modernization (119).  “It is thus in English as cultural system, broadly conceived – namely, in the new Indology and its wider reception in the Euro-American world – that the subcontinent was first conceived of in the modern era as a single cultural entity, a unique civilization with its roots on the Sanskritic and more particularly Vedic texts of the Aryans. . . .  The idea that India is a unique national civilization in possession of a ‘classical’ culture was first postulated on the terrain of literature, that is, in the very invention of the idea of Indian literature in the course of the philological revolution” (109).  The encounter between Oriental philology and Occidental literature produced a national literary model that inspired the Indian national sentiment and identity (115) and created the “institution of Indian literature” (37, 73).

    I have constructed here the chronological genealogy of world literature that drives Mufti’s argument, the linear story that is plotted in his book through complex discussions of practices, notions, and texts.  The “world” of world literature consists of indigenous cultures using vernaculars to sustain literature as their national institution.  Their heterogeneity is predicated on standardized difference, their cosmopolitanism is based on the nation-state, their unity guaranteed by unequal power relations, and they can all be traced to the Orientalist construction of the colonial archive, be it registry, collection, or museum.  Mufti puts into practice with great integrity and virtuosity his conviction that “the task of criticism today is at the very least the untangling and rearranging of the various elements presently congealed into seemingly distinct and autonomous objects of divergent literary histories.  The critical task of overcoming the colonial logics persistently at work in the formation of literary and linguistic identities today is thus indistinguishable from the task of pushing against the multiple identarian assumptions, colonial and Orientalist in nature, of Hindi and Urdu’s mutual and religiously marked distinctness and autonomy.  A post-colonial philology of this literary and linguistic complex can never adequately claim to be produced from a position uncontaminated by the language polemic that now constitutes it and can only proceed by working through its terms.  This secular-critical task, furthermore, corresponds not to the erection of some image of a heterogeneous past but to the elaboration of the contradictory contemporary situation of language and literature itself” (128-9).  Forgetting English is possible only in English.

    He advocates resistance both to the colonial gaze and to national authenticity, asking fellow scholars to “forget” (that is, learn to question by working with) not only English and the “world” in world literature but also the prefix in post-colonial.  “If, on the one hand, I urge world literature studies to take seriously the colonial origins of the very concept and practices they take as their object of study, on the other, I hope to question the more or less tacit nationalism of many contemporary attempts to champion the cultural products of the colonial and postcolonial world against the dominance of European and more broadly Western cultures and practices” (53).  This position exemplifies the notion of a contrapuntal criticism that takes into account intertwined perspectives and discourses. “No self-described attempt to ‘return’ to tradition, religious or secular, can sustain its claim to be autonomous of ‘the West’ as Other. . . . No attempt at self-definition and self-exploration can therefore bypass a historical critique of the West and its emergence into this particular position of dominance.  And, in this sense, the critique of the West and the logics of its imperial expansion from a postcolonial location is in fact a self-critique, since this location is at least partially a product of that historical process” (153-4).

    While both Orientalism and Occidentalism/Anglicism seek to capture a “one-world” reality, they are caught between the local and the cosmopolitan, the particular and the universal (3).  By consciously operating within these tensions without being at home in either of their poles, the exilic perspective introduced by Auerbach and later advocated by Said can avoid both cosmopolitan detachment and communal narcissism.  An “exilic rethinking of the philology of world literature” (41) would become the basis for a radicalized “philology as homeless practice” (200), for a “historically engaged and linguistically attuned” (241) secular criticism with a “missing homeland” (202).  Supporting neither transnational nor autochthonous social imaginaries, it can provide a dialectically alert account of concrete cultural circumstances “because it captures simultaneously the violent exclusions of the national frame, the material reality of its (physical as well as symbolic) borders, the dire need to overcome its destructive fixations, and its inescapability in the present moment” (194).

    In his conclusion, addressing the central case of the post-colonial subcontinent, Mufti supplements the exilic perspective with an additional one, also drawn from twentieth-century experience, which promises to offer intrinsic means of study by drawing explicitly on partition as condition and modality since the “politics of linguistic and literary indigenization is a distinct element in the larger historical process that culminated in the religio-political partition of India in 1947 and is thus at the same time an important element in the history of the worldwide institution of world literature” (38).  In a manner reminiscent of the ways in which post-Heideggerian thought puts metaphysics “under erasure,” Mufti puts the subcontinent under partition.  “In light of the historical analysis of the cultural logic of Orientalism-Anglicism operating in the long, fitful, and ongoing process of bourgeois modernization in the subcontinent that I have attempted here, the task of criticism with respect to the field of culture and society in the region is therefore to adopt partition as method, to enter this field and inhabit the processes of its bifurcation, partition not merely as event, result, or outcome but rather as the very modality of culture, a political logic that inheres in the core concepts and practices of the state” (200).  Not a closed part of the past or even its living memory, partition is “the very condition of possibility of nation-statehood and therefore the ever-renewed condition of national experience in the subcontinent” (201).  The political logic of partition is inherent in the normative majoritarianism of the modern nation-state, which by definition entails the minoritarization of certain groups and practices, a crisis of legitimacy leading to the partition of society (200-1).  “To argue for partition as method is, therefore, to argue for extracting submerged modes of thinking and feeling from the ongoing historical experience that is partition” (202).

    Furthermore, in the twenty-first century this condition operates far beyond the subcontinent.  Ours is a time of proliferating boundaries where the traditional institution of the border of the nation-state is undergoing internal and external challenges and transformations, with some of its functions “redistributed throughout social space” (7) and others globalized, turning it into a “universalized institution” (201).  What is the meaning of world literature in a world where borders are traversing urban, regional, national, and transnational environments and literature often functions as a generalized cartography?  With this question I will proceed to indicate just a few of the many fields of inquiry where this book deserves to be studied and activated.

    Mufti’s notion of “partition as method,” which enriches the problematic of books like Asia as Method:  Toward Deimperialization (2010) by Kuan-Hsing Chen and Border as Method (2013) by Sandro Mezzadra and Brett Neilson, should be of obvious interest to Border Studies, an interdisciplinary field that since the 1980s has been examining geographical, political, economic, cultural, and other boundaries primarily in Asia, Africa, and Latin America and with an emphasis on matters of migration and gender.  The field started by looking at legal, political, and lexical definitions but it has been expanding to consider how borderscapes are narrated, performed, and de-legitimized in the Global South.  An anatomy of world literature would complement current studies of the ways in which, in addition to lands, borderings distribute languages, communities, stories, signs, and jurisdictions.  The order of literature since its national and Oriental origins shows borders working as epistemological devices and markers of relations rather than lines and locations.

    An adjacent and even more interdisciplinary field is the study of territories and their flux in the integrated post-industrial world.  Influenced by the work of Deleuze & Guattari (with their interests from “minor literature” to plateaus to nomadology), it has radically shifted emphasis from the structure to the flow of capital and the dominant econo-semiotic system, which Mufti too has done with literature.  The “assemblage of enunciation” might fit well with his notion of the writing corpus, and the “plane of immanence” with his “plane of equivalence.”  Most importantly, the Deleuzian “rhythm” of difference and repetition would resonate with the contrapuntal circulation of literature in the post-colonial milieu.

    The sociology of culture would benefit greatly from attention to the emergence of the literary sphere and its citizenry, whose members often belong to the national intellectual aristocracy.  Given its interest in the ways in which Bourdieu’s habitus operates according to a logic of practice, it would examine the subfield of literature within the objects, norms, and practices of the cultural field.  Mufti’s work on production and appropriation, and above all domination through symbolic power, provides numerous examples of the kind of capital gained and interest served by disinterested taste as competence and distinction as performance.

    The quest for cultural capital and symbolic power has been driven by the counter-political ideology of the aesthetic state, a milieu and habitus where aesthetic practices constitute the highest form of politics.  Mufti contributes greatly to an understanding of this regime, including the institutions it establishes and cherishes.  The bourgeois subject, who is the citizen of that ideal state, responds to the functional differentiation of society in distinct borderlands with the democratization of art and the sacralization of high culture. Through the proper literary education, fiction and poetry train readers to achieve a Kantian freedom of aesthetic autonomy by giving the interpretive law to themselves above the constraints of any internal or external partition.

    The path from the sociology of culture to its ideology may lead next to its ethics, namely, art as a spiritual ascesis.  Mufti has discussed the political rationality of the humanities and the aesthetically administered university.  His rigorous genealogical approach may be supplemented by Ian Hunter’s interest in humanism and the pre-national state of the sixteenth and seventeenth centuries as well as the aesthetic discipline of literary cultivation that emerged with Romantic literature and philosophy.  The origins of the philological skills that mobilized Orientalism to create world literature may also lie in a combination of artistic pleasure as worldly ethical competence with literary criticism as a moral practice of the self, that is, in the aesthetico-ethical training of the self in interpretive (self-)problematization which first produced the reader of literature.

    In addition to chronicling the emergence of world literature, Aamir Mufti’s Forget English! reflects on “just about the most encompassing cultural concept of our times, the notion of the systematic totality of the expressive productions of nothing less than humanity in its entirety” (252).  Through a genealogy of literary comparison it raises the question of doing comparative humanities on a global level.  That is why it ought to have a broad scholarly and pedagogical impact.  This is not a book that scholars may simply read with profit and then add to their bibliography and syllabus.  It invites reflection on what it means to compare at a time of universal comparability, that is, when everything is comparable (and also appears contemporary) to everything else.  Rather than seeking to add unknown or neglected materials to our canons, it challenges us to reconfigure canon making itself as well as the way we put together panels, collective volumes, or institutes.  Ultimately, Mufti is proposing that, in addition to new critiques, World Humanities needs new ways of constituting the humanities as a common.

    Vassilis Lambropoulos is the C. P. Cavafy Professor of Modern Greek in the Departments of Classical Studies and Comparative Literature of the University of Michigan.  He is the author of Literature as National Institution (1988).

  • Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    by Elizabeth Losh, The College of William and Mary

The Gamergate controversy of recent years has brought renewed public attention to issues around online misogyny, as feminist game developers, critics, scholars, and fans of independent video gaming have been targeted by intense campaigns of digital harassment that threaten their fundamental rights to personal privacy, bodily safety, and sexual agency. Feminists under attack by users of the hashtag #GamerGate complain of being silenced, as they report being disciplined, through punishing messages of censure, ridicule, exclusion, and violence, for imagined infractions of supposed sexual, social, journalistic, and ludic norms in computational culture. As noted by the mainstream news media, extremely aggressive tactics have been deployed, including leaking women’s sensitive private information – such as unlisted addresses and social security numbers – to the web (a practice known as “doxxing”), placing false reports with law enforcement or emergency first responders (a practice known as “swatting”), and highly personalized stalking with rapid escalations of threats of graphic violence that are often sexualized as rape or racialized as lynching. Although it may be important for the eloquent first-person testimony of the terrorized women themselves to be given priority as speech acts that command attention in resisting prevailing misogyny, the women’s antagonists are often allowed to remain invisible. Furthermore, allies presuming to advocate for the feminist victims of Gamergate may not adequately honor the wishes for peace, privacy, and closure that those experiencing online violence may express (Quinn 2015). This essay attempts to examine the larger discursive context of Gamergate and why hardcore gamers who were fans of AAA videogames – often with military storylines and first-person shooter game mechanics – constructed a seemingly illogical and paranoid explanatory theory about so-called “social justice warriors” or “SJWs” (Bokhari et al. 2015), who allegedly pursue unfair advantage to sway the game industry.

    How do we understand how Gamergaters’ claims for noninterference and sovereignty in game worlds and online forums function alongside their claims for no-holds-barred investigations and public debates? Common rhetorical tactics deployed by Gamergaters include using rights-based language to further this specific variant of the men’s rights movement (Esmay 2014) and making appeals to the values of a supposedly rational public sphere (MSMPlan 2015). As these hardcore gaming fans deny the materiality, affect, embodiment, labor, and situatedness of new media, they also affirm positive notions about the exceptionalism of a realm defined – in Nicholas Negroponte’s terms – by bits rather than atoms. Gamergaters are particularly vehement in denying that “online violence” is a possibility with tweets such as “>violence >online pick one” and “will you please point me to the online killing fields where all the bodies from violence online are kept?” (Wernimont 2015). The Gamergate vision of digital culture is one of disembodied and immaterial interactions in which emotional harm is considered to be nonviolent.

According to Gamergate accounts, the assumption that hardcore gamers representing masculine white privilege were under attack was also apparently buttressed by a number of online articles by game journalists suggesting that the species was endangered and soon to be extinct. Gamers were declared “over” (Alexander 2014), at their “end” (Golding 2014), or facing the “death” of their collective identity (Plunkett 2014). The arguments made for years by feminist game collectives for pursuing the large market share in lower-status “casual” games, often played by women, seemed finally to have created inroads for independent developers. At the same time, Gamergaters described their defensive position as a response to what they often characterized as a feminist “incursion” or “invasion” of gaming, conceptualized as a substantive attack or threat to gamers. So-called “men’s rights” proponents – who may characterize themselves as “Men’s Human Rights Activists” – differentiated themselves from the distributed and heterogeneous population of gamers but also proclaimed that “the same people attacking Gamergate have been attacking us for years, using exactly the same tactics” (Esmay 2014).
According to Breitbart columnist Milo Yiannopoulos (2014a), “cultural warriors” arrived on the scene of gaming like “genocidal, psychopathic aliens in Independence Day;” these “social justice warriors” allegedly attempted to colonize a diverse community, but their “killjoy” advances were repelled and defenders declared them “not welcome in the gaming community.” According to this columnist, “politeness and persistence” had supposedly guaranteed victory in “the culture wars against guilt-mongerers, nannies, authoritarians and far-Left agitators.” While Sara Ahmed (2010) has explicitly called for self-identified “feminist killjoys” to disrupt the perpetuation of patriarchal false consciousness and the enforcement of positive affect in society, the perceived opponents of Gamergate are often cast as the aggressors despite what may be deep desires to participate in the gaming communities that exclude them.

Decades before Gamergate, Dutch historian and theorist of play Johan Huizinga (2014) described what he called the “magic circle” of the temporary world constituted by a game, which appears to function as an isolated “consecrated spot” within which “special rules obtain” for performances apart from everyday concerns (10). Gamergaters often use similar terminology to discuss how game spaces are intended to serve as a refuge from real-world behavioral constraints and the restrictions of social roles, as in the case of one Breitbart blogger seeking to exclude “angry feminists” and “unethical journalists” from interference with game play:

    Gamers, as dozens of readers have told me in the relatively short time I have been covering the controversy now called #GamerGate, play games to escape the frustrations and absurdities of everyday life. That’s why they object so strongly to having those frustrations injected into their online worlds. The war in the gaming industry isn’t about right versus left, or tolerance versus bigotry: it’s between those who leverage video games to fight proxy wars about other things, introducing unwanted and unwarranted tension and misery, and those who simply want to enjoy themselves. (Yiannopoulos 2014a)

Gamergate advocates claim that video games should be arenas where gamers can assert their sovereignty and self-determination, spaces that can’t be “leveraged” or annexed by non-gamer outsiders to “fight proxy wars.”

According to Huizinga (2014), the arena of game play is characterized by the freedom of voluntary participation, disinterested behavior, and an opposition to serious conduct. Similar criteria are also often presented as premises for action in the rhetoric of Gamergate enthusiasts in their comments on various sites for public debate. For example, feminist game developers and critics may be accused of coercing and manipulating potential allies among journalists through sexual liaisons, romantic promises, or appeals to social justice that invoke guilt and shame. Feminist opponents of Gamergaters are also characterized on sites such as Breitbart as “self-promoters” and “opportunists” and labeled as “egotistical” people who “beg for sympathy and cash” (Yiannopoulos 2014b). Thus, according to the logic of free choice, feminist “social justice-oriented art” in digital culture is aimed at “robbing players of agency and individualism” in every possible kind of engagement (Yiannopoulos 2014b).

Personal freedom and a separation from material interests or a profit motive are often cited as ethical values shared by Gamergate, although many of its tactics are not at all solemn or high-minded. Active Gamergaters on the Escapist and 8chan emphasize their own diverse and distributed structure, and these anarchic swarms of participants take action “for the lulz,” much as members of Anonymous and 4chan have engaged in outing and calling-out campaigns (Coleman 2014). Images of feminist gamers are altered with editing software, phrases like “online violence” are mocked, and fake identities are manufactured with puns and inside jokes. For example, in a crowd-funding effort to promote women in games who disavowed feminist “SJWs,” Gamergate forum members created an elaborate green-eyed and hoodie-wearing fictional persona intended to represent a pro-Gamergate libertarian “everywoman.” The avatar dubbed “Vivian James” wears the four-leafed clover of 4chan, “tough-loves video games,” and “loathes dishonesty and hypocrisy” (“The Birth of Vivian” 2015).

    While Gamergaters emphasize “personal responsibility” and “individual agency” (Yiannopoulos 2014b) as values, feminist critics tend to emphasize interdependence and states of being always-already subject to the coercions of others. In Huizinga’s (2014) terms, feminists inside the magic circle may be perceived as “spoil-sports” who must be “ejected” from the “community,” because they are attempting to break the magic world by failing to acknowledge its misogynistic conventions (11-12). As Anastasia Salter (2016) notes, in Huizinga’s analysis the spoil-sport is most visible in “boys’ games,” thereby establishing solidarity around youthful masculinity as the norm.

By discussing misogyny in different venues for conversation among networked publics in game forums, blogs, or vlogging communities, and even within live multi-player gaming itself, feminists are cast as a disruptive presence. Social justice warriors must be treated as aggressors whom Gamergaters repel from the magic circles of game worlds in order to reclaim these spaces, return them to their proper exceptional status, and thus maintain their security from real-world incursions.

Of course, the concept of “safe space” has been central to the history of the women’s liberation movement and its associated consciousness-raising efforts. After all, feminists have reasoned that safe space might be necessary to explore intimate issues about sexuality and reproductive health – which might even include techniques for gynecological self-examination championed by foundational texts like Our Bodies, Ourselves – and safe space would also be needed to share confidences about personal histories of rape, domestic violence, and other forms of gendered trauma. How safe space is constituted can be developed along a number of different axes. For example, as awareness about “microaggressions” – a term used to describe the automatic or unconscious utterance of subtle insults (Solorzano, Ceja, & Yosso 2000) – has proliferated, participants at feminist events may be asked to be mindful of their own assumptions, privileges, and power relations in social gatherings. The full sensorium of potential kinds of assault may also be invoked in defining safe spaces, so speaking loudly or wearing scent may be prohibited to protect those intolerant, averse, or allergic to certain stimuli.

Feminists themselves have been reevaluating the assumed need for safe space for a variety of reasons. While media outlets grappling with the concept of “trigger warnings” may characterize any special treatment of vulnerable individuals as coddling or “hiding from scary ideas” (Shulevitz 2015), feminists are often concerned about how the gestures of exclusion mandated by protective impulses enforce particular norms counter to the goal of empowerment. Some argue that “brave spaces” that encourage public acts of asserting identity or declaring solidarity may be more productive than private “safe spaces” (Fox and Fleischer 2004). Homogeneous safe spaces designed for the security of cisgendered whites may be criticized as excluding transgender people (Browne 2009) or people of color (Halberstam 2014). As Betty Sasaki (2002) observes, “safety” can become “the code word for the absence of conflict, a tacit and seductive invitation to collude with the unspoken ideological machinery of the institutional family” (47). And Donadey (2009) points out the irony “that radical feminist pedagogy tends to replicate the assumptions of the bourgeois concept of the public sphere” (214).

In addition to using the #Gamergate and #SJW (for “social justice warrior”) hashtags on social media platforms such as Twitter, Gamergate adherents frequently use #NotYourShield, which indicates that feminists shouldn’t be shielded from criticism merely because they might claim alliances with underrepresented groups, such as women or minorities, given that members of these groups might not identify with feminism or feel exploited, disenfranchised, or excluded from hardcore gaming communities. #NotYourShield allies of Gamergate may embrace the quintessential hardcore gamer identity of AAA titles with military themes, or may indicate that they are content with conventionally feminized casual games played on mobile devices and don’t want to interfere with so-called “real” games. While Gamergaters may protect the borders of their own magic circles, they criticize those who claim feminist discourse operates in safe spaces devoid of challenges from opponents. Affixing the #NotYourShield piece of metadata to a message supports Gamergaters’ contentions that feminists use the victimization of women and people of color to shield themselves unfairly from rebuttals or tests of truth claims. In videos such as “#NotYourShield – We Are Gamers,” choruses of voices are carefully curated to emphasize “corruption” and “censorship” as features of feminism, and “transparency” and call-out culture as features of Gamergate.

    Although Huizinga’s (2014) magic circle may be more open to public spectatorship than the private sphere of feminist safe space, it is also a zone of exception that is marked off by “secrecy” and “disguise,” according to Homo Ludens (13). Even if the rules for the magic circle are assumed to be uncontested, and the space of play is accepted as apart from the everyday world, the exceptional territory of game play could be a space of less violence (if mockery of authoritarian rulers is tolerated in the case of the Bakhtinian carnivalesque) or more violence (if physical injuries from contact sports are permitted that would normally be prosecuted as assault). Nonetheless, according to Edward Castronova (2007), the membrane of the magic circle “can be considered a shield of sorts, protecting the fantasy world from the outside world. The inner world needs defining and protecting because it is necessary that everyone who goes there adhere to the different set of rules” (147).

Feminist game critics have begun to question Huizinga’s (2014) concept of a zone of exceptionalism, particularly as the legal, economic, and social consequences of game play are manifested in a variety of “real world” contexts. For example, Mia Consalvo (2009) challenges Castronova’s belief that “fantasy worlds” are a separate domain: “even as he might wish for such spaces, such worlds must inevitably leave the hands of their creators and are then taken up (and altered, bent, modified, extended) by players or users—indicating that the inviolability of the game space is a fiction, as is the magic circle, as pertaining to digital games” (411). Within game spaces of conflict and collaboration, players may bring different agendas into the magic circle, and thus it might be more difficult than Huizinga (or Castronova) imagines to reach consensus about the common rules of play. For example, when a guild of players in World of Warcraft decided to hold a funeral in an area for player-versus-player combat, other participants justified attacking the solemn ceremony in a coordinated raid on the grounds of asserting existing play conventions (Losh 2009). Consalvo further claims that the static, formalist vision of bounded play articulated by Huizinga and his disciples, grounded in structuralist theory, ignores the fact that context is constantly being evaluated by players. Instead of the magic circle, she posits that players “exist or understand ‘reality’ through recourse to various frames” (415).

For women, queer and transgender persons, and people of color who identify as gamers, neither magic circle nor safe space often seems descriptive of the harsh settings of their game play experiences. As Lisa Nakamura (2012) observes, playing as a woman, a person of color, or a queer person requires extraordinary game skills and talent at a level of hyper-accomplishment because of the extremely rigorous “difficulty setting” of playing in an identity position other than straight white male. Unfortunately, to be an exceptional individual in an exceptional space is often to be punished rather than rewarded. Moreover, as a woman of color, Shonte Daniels (2014) has insisted that “gaming never was a safe space for women” because “their identity makes them vulnerable to threats or harassment.” However, she also speculates that Gamergate may prove to be “both a blessing and a curse,” given how much attention to online misogyny has been generated by the intensity and egregiousness of Gamergate behavior.

    Many date the Gamergate controversy from fall 2014 – when harassment of dozens of feminists in the videogame industry, including game developers Zoë Quinn and Brianna Wu and cultural critic Anita Sarkeesian, made headlines. However, online misogyny and gender-based aggression have had a long history in digital culture that goes back to bulletin boards, MOOs, and MUDs and the existence of virtual rape in early forms of cyberspace (Dibbell 1998). To coordinate the current campaign of harassment, IRC channels and online forums such as Reddit, 4chan, and 8chan were used by an anonymous and amorphous group that came to be represented by the Twitter hashtag #GamerGate after actor Adam Baldwin deployed a familiar suffix associated with prominent political cover-ups. According to the Wikipedia entry, Gamergate “has been described as a manifestation of a culture war over gaming culture diversification, artistic recognition and social criticism of video games, and the gamer social identity. Some of the people using the Gamergate hashtag allege collusion among feminists, progressives, journalists and social critics, which they believe is the cause of increasing social criticism in video game reviews” (“Gamergate Controversy” 2015).

It is worth noting that Wikipedia’s handling of its own distributed labor practices in defining Gamergate has had a contentious history, one that included a personal invitation to Gamergaters from Wikipedia founder Jimmy Wales to contribute to improving the Gamergate article (Wales 2014), a pointed rejection of financial contributions to Wikipedia from Gamergaters (“So I Decided to Email Jimbo” 2014), and a defense of banning Wikipedia editors perceived as biased against Gamergate (Beaudette 2015). Ironically, during this intense period of engagement with the “toxic” participants of Gamergate eventually dismissed by Wales, Wikipedia often deployed a rhetoric about volunteerism, disinterested conduct, and playing by a neutral set of rules that paralleled similar rhetorical appeals from Gamergaters.

    Attention to this recent controversy – about who is a gamer and what is a game – has already generated a literature of scholarly response that focuses, as this essay does, on Gamergate rhetoric itself. Shira Chess and Adrienne Shaw’s (2015) essay, “A Conspiracy of Fishes,” analyzes how a particular cultural moment in which “masculine gaming culture became aware of and began responding to feminist game scholars” produced conspiratorial discourses with a specific internal logic that shouldn’t be dismissed as nonsensical:

    It is less useful to consider the validity of a conspiracy in terms of actual persecution, and is more potent if we look at it in terms of a combination of perceived persecution and an examination of the anxieties that the conspiracy is articulating. From this perspective, we can look at gaming culture as a somewhat marginalized group: For years those who have participated in gaming culture have defended their interests in spite of claims by popular media and (some) academics blaming it for violence, racism, and sexism. A perceived threat opens a venue for those who feel their culture has been misunderstood—regardless of whether they are the oppressors or the ones being oppressed. It is easy to negate and mark the claims of this group as inconsequential, but it is more powerful to consider the cultural realities that underline those claims. (217)

As Chess and Shaw point out, the gamer identity may function in the context of other kinds of intersectional identities, in which subjects for whom the personal is political can be imagined as oppressors in one context and the oppressed in another.

In addition to deploying the primary strategy of constructing a persecution narrative aimed at a marginalized group, Gamergate is also concerned with the secondary strategy of mapping supposed networks of influence across publication venues, media genres, knowledge domains, political spheres, and economic sectors. Such Gamergate infographics seem to have begun with visualizations reminiscent of Wanted posters, in which names and photographs of individual offenders were clustered in particular interest areas. For example, 4chan assembled a list of “SJW Game Journalists” that was republished on Reddit, which goes far beyond the initial allegations of impropriety about game reviewing at Kotaku to target writers at over a dozen other publications.

    As Gamergaters go down the “rabbit hole” of exploring possible connections and exposing hidden networks, they eventually claim political and educational institutions as agents in the conspiracy with a particular focus on DiGRA, the Digital Games Research Association, which was founded in 2003 and holds an international conference each year. One diagram shows the tentacles of DiGRA extending into online venues for gaming news and reviews, such as Kotaku, Gamasutra, and Polygon, as well as mainstream publications with a print tradition, such as The Guardian and TIME, and conference venues for many AAA games, such as the annual Game Developers Conference (GDC), which was founded in 1988 with a focus on fostering more creativity in the industry. Pictures of offender/participants in the network continued to be featured in this denser and more recursive form of network mapping, as though facial recognition would be a key literacy for Gamergaters.

It is worth noting that many feminists would describe DiGRA as far from a haven from misogyny, given existing biases in game studies that may privilege academics with ties to computer science, corporate start-ups, or other male-dominated fields. Members of the feminist game collective Ludica have described strong reactions of denial when they declared at DiGRA in 2007 that the “power elite of the game industry is a predominately white, and secondarily Asian, male-dominated corporate and creative elite that represents a select group of large, global publishing companies in conjunction with a handful of massive chain retail distributors” and thus constitutes a “hegemonic” power that “determines which technologies will be deployed, and which will not; which games will be made, and by which designers; which players are important to design for, and which play styles will be supported” (Fron et al. 2007). The rhetoric of the Ludica manifestos about how games and gamers were being defined too rigidly by an industry enamored of AAA titles often ran counter to the origin stories of organizations such as GDC and SIGGRAPH.

The third key strategy of Gamergaters – in addition to fabricating the persecution narrative and the influence maps – is formulating threats of financial retaliation. If liberal members of the press and academic and professional associations in game studies and game development benefit from a supposed flow of money, social capital, and privileged access to career advancement, libertarian Gamergaters will thwart them with economic threats. This creates a paradoxical dynamic in which Gamergaters both assert an ethos of economic disinterest – because gaming is supposed to be a non-profit/non-wage activity that is separate from the accumulation of capital in the real world – and seek to exercise their collective power to crowdfund sympathizers and to boycott, divest from, and freeze the assets of feminist allies and allied organizations. Advertisers are besieged with consumer complaints about the ethics of reporting in game publications, university employees are reported to administrators with accusations of frittering away public funds, and even donations to Wikipedia are withdrawn by indignant Gamergaters.

Because feminists supposedly use financial interest as a lever, Gamergaters must also use financial interest as a way to assert the fairness, neutrality, and civility of a rational public sphere, which is tied to their fourth strategy: policing discourse. In regulating language to keep it freely flowing in a neoliberal marketplace of ideas, where the best notions will be the most valued, Gamergaters explicitly refuse to tolerate what they deem hyperbolic and hysterical feminist “strawmanning” and “insulting.” Even as they insist, in a counterfactual account of their power to terrorize targets and dominate channels of communication, that harassers are a statistically insignificant fraction of their movement, language reminiscent of Robert’s Rules of Order can be encountered in Gamergate discourses as commonly as more stereotypical forms of trolling.

    This does not mean that the campaigns of Gamergate to construct us-and-them narratives, to make explicit and to visualize connections in social networks, to block some financial transactions and facilitate others, and to regulate discourse with structures of rational dialogue, leveling effects, and tone policing are not misogynistic. They defend and enable doxxing, swatting, and stalking behaviors that undermine the very barriers between virtual reality and material existence that are central to their contradictory ideologies of exceptionalism and common jurisdiction.

    The need for nurturing diversity among game players and developers (Fron et al. 2007) has been a work in progress for the better part of a decade, but in the wake of Gamergate, hundreds of prominent signatories who asserted the “right to play games, criticize games and make games without getting harassed or threatened” published an “open letter to the gaming community” (IGDA 2014). The fact that this pointed defense of feminist gamers, critics, and designers also used rights-based language might be instructive for better understanding the discursive context of Gamergate as well.

    The Italian biopolitical philosopher Roberto Esposito (2010, 2011) has theorized that two conflicting modalities of “community” and “immunity” operate when members either accept or resist the obligations of the social contract. Looking at the rhetoric of Gamergaters about the magic circle and how they caricature the rhetoric of feminists about safe space, we see how these oppositions are underexamined, and we can ask why opportunities for reflection and reflexive thinking about intersectionality are being foreclosed.

    Works Cited

    • Ahmed, Sara. 2010. The Promise of Happiness. Durham: Duke University Press.
    • Alexander, Leigh. 2014. “‘Gamers’ Don’t Have to Be Your Audience. ‘Gamers’ Are Over.” Gamasutra, August 28. http://www.gamasutra.com/view/news/224400/Gamers_dont_have_to_be_your_audience_Gamers_are_over.php.
    • Bailey, Moya. 2015. “#transform(ing)DH Writing and Research: An Autoethnography of Digital Humanities and Feminist Ethics.” Digital Humanities Quarterly 9, no. 2.
    • Beaudette, Philippe. 2015. “Civility, Wikipedia, and the Conversation on Gamergate.” Wikimedia Blog. January 27. http://blog.wikimedia.org/2015/01/27/civility-wikipedia-Gamergate/.
    • Bokhari, Allum, and Milo Yiannopoulos. 2015. “Entertainment Industry Says ‘No More’ to Social Justice Warriors.” Breitbart. July 20. http://www.breitbart.com/big-hollywood/2015/07/20/enough-entire-entertainment-industry-says-no-more-to-social-justice-warriors/.
    • Browne, Kath. 2009. “Womyn’s Separatist Spaces: Rethinking Spaces of Difference and Exclusion.” Transactions of the Institute of British Geographers, New Series, 34 (4): 541–56.
    • Castronova, Edward. 2007. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
    • Chess, Shira, and Adrienne Shaw. 2015. “A Conspiracy of Fishes, Or, How We Learned to Stop Worrying About #Gamergate and Embrace Hegemonic Masculinity.” Journal of Broadcasting & Electronic Media 59, no. 1: 208–20.
    • Coleman, Beth. 2011. Hello Avatar: Rise of the Networked Generation. Cambridge, MA: MIT Press.
    • Coleman, E. Gabriella. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Brooklyn, NY: Verso.
    • Consalvo, Mia. 2009. “There Is No Magic Circle.” Games and Culture 4, no. 4: 408–17.
    • Daniels, Shonte. 2014. “Gaming Was Never a Safe Space for Women.” RH Reality Check. November 4. http://rhrealitycheck.org/article/2014/11/10/gaming-never-safe-space-women/.
    • Dibbell, Julian. 1998. “A Rape in Cyberspace.” http://www.juliandibbell.com/articles/a-rape-in-cyberspace/.
    • Donadey, Anne. 2009. “Negotiating Tensions: Teaching about Race in a Graduate Feminist Classroom.” In Feminist Pedagogy: Looking back to Move Forward, edited by Robbin Crabtree, David Alan Sapp, and Adela C. Licona, 209–29. Baltimore, MD: Johns Hopkins University Press.
    • Esmay, Dean. 2014. “Keeping up with #Gamergate.” A Voice for Men. October 16. https://lockerdome.com/7754206970916417.
    • Esposito, Roberto. 2010. Communitas: The Origin and Destiny of Community. Stanford, Calif.: Stanford University Press.
    • ———. 2011. Immunitas: The Protection and Negation of Life. Cambridge, UK, and Malden, MA: Polity.
    • Fox, D. L., and C. Fleischer. 2004. “Beginning Words: Toward ‘Brave Spaces’ in English Education.” English Education 37, no. 1: 3–4.
    • Fron, Janine, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. 2007. “The Hegemony of Play.” In Proceedings, DiGRA: Situated Play, Tokyo, September 24-27, 2007, 309–18. Tokyo, Japan. http://www.digra.org/dl/db/07312.31224.pdf.
    • “Gamergate Controversy.” 2015. Wikipedia, the Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Gamergate_controversy&oldid=682713753.
    • Golding, Dan. 2014. “The End of Gamers.” Dan Golding. August 28. http://dangolding.tumblr.com/post/95985875943/the-end-of-gamers.
    • Halberstam, Jack. 2014. “You Are Triggering Me! The Neo-Liberal Rhetoric of Harm, Danger and Trauma.” Bully Bloggers. July 5. https://bullybloggers.wordpress.com/2014/07/05/you-are-triggering-me-the-neo-liberal-rhetoric-of-harm-danger-and-trauma/.
    • Huizinga, Johan. 2014. Homo Ludens: A Study of the Play-Element in Culture. Mansfield Centre, CT: Martino Fine Books.
    • “IGDA Developer Satisfaction Survey Summary Report Available – International Game Developers Association (IGDA).” 2015. https://www.igda.org/news/179436/IGDA-Developer-Satisfaction-Survey-Summary-Report-Available.htm (accessed September 23, 2015).
    • Jacobs-Huey, Lanita. 2006. From the Kitchen to the Parlor Language and Becoming in African American Women’s Hair Care. Oxford, UK, and New York, NY: Oxford University Press.
    • Koebler, Jason. 2015. “Dear Gamergate: Please Stop Stealing Our Shit.” Motherboard. http://motherboard.vice.com/read/dear-Gamergate-please-stop-stealing-our-shit (accessed September 24, 2015).
    • Levmore, Saul, and Martha Craven Nussbaum. 2010. The Offensive Internet: Speech, Privacy, and Reputation. Cambridge, MA: Harvard University Press.
    • Losh, Elizabeth. 2009. “Regulating Violence in Virtual Worlds: Theorizing Just War and Defining War Crimes in World of Warcraft.” Pacific Coast Philology 44, no. 2: 159–72.
    • MSMPlan. 2015. “The Flaws in Adrienne Shaw’s Paper on Gamergate and Conspiracy Theories.” Medium. March 18. https://medium.com/@MSMPlan/the-flaws-in-adrienne-shaw-s-paper-on-Gamergate-and-conspiracy-theories-7fc91df43bc.
    • Nakamura, Lisa. 2012. “Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital.” Ada: A Journal of Gender, New Media & Technology 1, no. 1. http://adanewmedia.org/2012/11/issue1-nakamura/
    • Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.
    • Plunkett, Luke. 2014. “We Might Be Witnessing The ‘Death of An Identity.’” Kotaku, August 28. http://kotaku.com/we-might-be-witnessing-the-death-of-an-identity-1628203079.
    • Quinn, Zoe. 2015. “August Never Ends.” Quinnspiracy Blog. January 11. http://ohdeargodbees.tumblr.com/post/107838639074/august-never-ends.
    • Salter, Anastasia. 2016. “Code before Content? Brogrammer Culture in Games and Electronic Literature.” presented at the Electronic Literature Organization, University of Victoria, June 10.
    • Sargon of Akkad. 2014. A Conspiracy Within Gaming #Gamergate #NotYourShield. https://www.youtube.com/watch?v=yJyU7RSvs_s.
    • Sasaki, Betty. 2002. “Toward a Pedagogy of Coalition.” In Twenty-First-Century Feminist Classrooms: Pedagogies of Identity and Difference, edited by Amie A. Macdonald and Susan Sánchez-Casal, 31–57. New York, NY: Palgrave Macmillan.
    • Shield Project. 2014. #NotYourShield – We Are Gamers. https://www.youtube.com/watch?v=SYqBdCmDR0M#t=81.
    • Shulevitz, Judith. 2015. “In College and Hiding From Scary Ideas.” The New York Times, March 21. http://www.nytimes.com/2015/03/22/opinion/sunday/judith-shulevitz-hiding-from-scary-ideas.html.
    • “So I Decided to Email Jimbo…” 2015. Reddit. Accessed September 25. https://www.reddit.com/r/KotakuInAction/comments/2pphuo/so_i_decided_to_email_jimbo/cmyzva7?context=3.
    • Solorzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” The Journal of Negro Education 69, no. 1/2: 60–73.
    • “The Birth of Vivian.” 2015. http://i.imgur.com/FdqKFwu.jpg (accessed September 27, 2015).
    • Wales, Jimmy. 2014. “I Have an Idea for pro #Gamergate Folks of Good Will. Go to http://Gamergate.wikia.com/Proposed_Wikipedia_Entry … and Write What You Think Is an Appropriate Article.” Microblog. @jimmy_wales. November 12. https://twitter.com/jimmy_wales/status/532624325694992385?ref_src=twsrc%5Etfw.
    • Wernimont, Jacqueline. 2015. “A ‘Conversation’ about Violence against Women Online (with Images, Tweets) · Jwernimo.” Storify. https://storify.com/jwernimo/a-conversation-about-violence-against-women-online (accessed September 23, 2015).
    • Yiannopoulos, Milo. 2014a. “Gamergate: Angry Feminists, Unethical Journalists Are the Ones Not Welcome in the Gaming Community.” Breitbart. September 14. http://www.breitbart.com/big-hollywood/2014/09/15/the-Gamergate-movement-is-making-terrific-progress-don-t-stop-now/.
    • ———. 2014b. “The Authoritarian Left Was on Course to Win the Culture Wars… Then Along Came #Gamergate.” Breitbart. November 12. http://www.breitbart.com/london/2014/11/12/the-authoritarian-left-was-on-course-to-win-the-culture-wars-then-along-came-Gamergate/.