
  • Quinn DuPont – Ubiquitous Computing, Intermittent Critique

    a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)

    by Quinn DuPont

    ~

    It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the gambit of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.

    There are of course many ways to approach such a large and important topic: from the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal either entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six out of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength, and one of its major weaknesses.

By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, as a kind of Peircean index, and one such way to think about systems of representation. Another example of theirs, Natalie Jeremijenko’s One Trees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index, since the trees are from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully-grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristic indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.

    James Turrell, Meeting, 2016. MoMA PS1. Photos by Pablo Enriquez.

A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways that film, cinema, and theater are simply not like modern digital technologies. Such an epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating from screen studies—and, more proximately, of Lev Manovich’s agenda-setting The Language of New Media (2001), which revelled in the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.

    Because of this orientation, some of the chapters fail to critically engage with technologies, events, and practices largely affecting lived society. A very good artwork may go a long way to exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities, and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to avoiding what amounts to belated criticism.

    This specificity about social issues might come in the form of engagement with normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do, which is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, unsexy aspects of wearable technologies (accessibility) now offer potential assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented surveillance and monetization opportunities. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity, they are hard to see, and therefore, tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.

A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things are empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important subject of study, but not as the pile-on of Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.

    Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.

Before turning to a sample of the truly excellent chapters in this volume, I must pause to make a comment about the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, by some back-of-the-napkin calculations, the 406 dense pages amount to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And, while such a large book might seem like good value, putting an estimated 330,000 words into a single volume requires considerable care in typesetting and layout, which it unfortunately did not receive here. At about 90 characters per line, and 46 lines per page—all set in a single column—the tiny text set on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text there are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.

    Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to attempt to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem any other weaknesses.

Elaborating on a theme from her 2011 book Programmed Visions (MIT), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from “Ubiquitarians,” a Lutheran sect who believed Christ was present ‘everywhere at once’ and therefore invisible.) According to Chun’s reading of Weiser, to get to a state of ubiquitous computing, machines must lose their individualized identity or importance. Therefore, unindividuated computers had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential aspect of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment to cognitive models of computation), storage was a well-understood word, which etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). Chun concludes that if we want to rehabilitate Weiser’s original image of being able to “think and live,” we need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).

Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, through the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Waitangi Treaty, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore, the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content to simply offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer-to-peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.

    Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release), and the now-maligned Google Glass offer unique value propositions for those engaged in a spectrum of impairment and “complex disability effects” (274). For people who rely on these devices for day-to-day assistance and wayfinding, these devices are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.

John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” when the Dow fell nearly a thousand points, losing a trillion dollars in value, and rebounded within five minutes, are essentially uncontrollable and unpredictable. This narrative, Johnston points out, has been detailed before, but he twists it and argues that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events, which just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into a critical technology that is both ubiquitous and complex.

    _____

    Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.


  • Andrew Martino – Exhuming the Text: Alice Kaplan’s “Looking for the Stranger: Albert Camus and the Life of a Literary Classic”

    Alice Kaplan’s Looking for the Stranger: Albert Camus and the Life of a Literary Classic

    Reviewed by Andrew Martino

Albert Camus never considered himself an existentialist. In fact, Camus never exclusively believed in any school of thought. Camus was the consummate outsider, the one who stood apart from those whose views forced them into a narrow ideology, especially when that ideology mixed with violence, something Camus steadfastly resisted. If we had to place Camus into any category, it would be that of the humanist caught in the absurd. Camus believed in life over death (without believing in an afterlife), yet this belief did not keep him from contemplating the question of suicide, the only serious philosophical problem confronting us, as he writes in The Myth of Sisyphus. Camus’ humble beginnings in extreme poverty and illiteracy in his native Algeria testify to the power of the human spirit in the face of an indifferent world. When he was awarded the Nobel Prize for Literature in 1957 he expressed reservations and claimed that the prize should have gone to André Malraux, an early influence on his writing. Camus also realized that the Nobel would bring a certain celebrity that would complicate his life, perhaps even sabotaging his art. Add to this his “silence” on the Algerian problem and his very public and acrimonious break with Sartre, and Camus becomes a figure trapped in a world where he is increasingly unable to control his own image. Camus is a problematic figure who is claimed by both the Right and the Left, leaving the man and his writing caught in a political vortex. Focusing on the postcolonial aspect of The Stranger, Edward W. Said writes that Camus “is a moral man in an immoral situation.”[i] When Camus died at the age of 46 in a car accident in 1960, he left the world with the image of the charismatic young man, Bogart-like in his coolness, and still with the promise of great things to come. But a saint he was not. His numerous affairs and constant womanizing, his reluctance to act or speak out against French imperialism in Algeria, his disillusionment with and expulsion from the Communist Party, render him more human than academics might be comfortable with. Camus’ life was full of contradictions, full of silences. Yet, it was precisely from these contradictions and silences that Camus produced one of the most important and widely read books of the twentieth century.

Looking back over the seven decades since the publication of The Stranger, and despite Camus’ reluctance to situate (in the Sartrean sense of the term) himself in the bubble of existentialism, a bubble in which The Stranger and his relationship with Sartre placed him, the novel blazed a path that opened up fields where the absurd might be articulated, contemplated, and confronted from the inside (the modernist bent) rather than from above and beyond, as the canonical novels of the nineteenth century may have done. In her essay “French Existentialism,” Hannah Arendt briefly examines Sartre and Camus’ influence on the “new” movement where novels carry the weight of philosophy. Throughout that essay she also comments on Camus’ reluctance to be labeled an existentialist. “Camus has probably protested against being called an Existentialist because for him the absurdity does not lie in man as such or in the world as such but only in their being thrown together.”[ii] Here we have what is perhaps the most concise and articulate formulation of absurdist philosophy to date. Camus’ definition of absurdity, painstakingly mapped out in Caligula, The Stranger, and The Myth of Sisyphus, is not quite existentialism, but it does contain existentialist DNA, especially Kierkegaardian and Dostoevskian DNA (Kierkegaard and Dostoevsky being two of Camus’ patron saints). As Camus remarks in The Myth of Sisyphus: “I can therefore say that the Absurd is not in man (if such a metaphor could have a meaning) nor in the world, but in their presence together.”[iii] Camus’ definition of the absurd is also the epistemological curve in the road separating him from Sartre’s thinking. If Sartre’s philosophy can be distilled into his phrase “Hell is other people,” locating anguish in relations among people, then Camus’ articulation of the absurd, as we have seen above, resides in the relationship of humans with their world.

    Together, Sartre and Camus blazed a path where philosophy and art, in this case literature, met, thereby ushering in a new form of the novel, one that would examine existence from a philosophical perspective while making use of a form in which to mold these philosophical perspectives. What emerges from this is a hybrid. According to Randall Collins, “What was identified was a tradition of literary-philosophical hybrids. Sartre and Camus were key formulators of the canon, and themselves archetypes of the career overlap between academic networks and the writers’ market. The phenomenon of existentialism in the 1940s and 1950s added another layer to this overlap.”[iv] But this hybridization was more than a heady cerebral new movement in fiction; this hybrid constituted a new way of thinking about the world, a world that emerged primarily from a particular network of intellectuals at a particular time in Paris. Sartre and Camus are on the crest of this wave of existentialism and their thinking would go on to change the world.

Alice Kaplan’s extraordinary new book Looking for the Stranger: Albert Camus and the Life of a Literary Classic is a careful and meticulously researched examination of Camus’ 1942 novel. Kaplan is one of the leading scholars of twentieth-century French culture and history. She is currently the John M. Musser Professor of French at Yale University, where she also received her Ph.D. in French in 1981. She has published seven books, including French Lessons: A Memoir (1993), The Collaborator: The Trial and Execution of Robert Brasillach (2000), and Dreaming in French: The Paris Years of Jacqueline Bouvier Kennedy, Susan Sontag, and Angela Davis (2012). In 2013 Kaplan edited and provided the introduction to Algerian Chronicles, a collection of articles and essays Camus wrote from 1939 to 1958; the edition marked the first time these writings had appeared in English. She is, in short, no stranger to Camus and his place in twentieth-century French culture.

Early on, Kaplan claims that Looking for the Stranger is actually a biography of Camus’ best-known work, one of the most famous and widely read texts of the twentieth century. This does not mean, however, that Kaplan foregoes a glimpse into Camus’ life, thus resurrecting the Barthesian “death of the author” debate. Instead, Kaplan goes looking for The Stranger in the author instead of the author in The Stranger; the difference is subtly stunning. In other words, her investigation is more preoccupied with the creative process and its cultural and social context than it is with getting to the author as a god-like figure. Camus always claimed that The Stranger was the second in a three-part series exploring the absurd from three different perspectives: a novel (The Stranger), a dramatization (Caligula), and a philosophical work (The Myth of Sisyphus). But The Stranger is hardly a book that needs rescuing from obscurity, nor does Kaplan claim that it does. To date the novel has sold over ten million copies and is still read in over forty languages. It remains on high school and college syllabi, making it required reading for young men and women; in fact, a student’s first encounter with existentialism and the absurd is likely to come from a reading of The Stranger. Rather than a rescue, Kaplan offers us a more comprehensive look into the text, running down every lead, exploring every avenue that might expand our understanding of what makes The Stranger the text that it is.

Kaplan begins by acknowledging the spectacular success of The Stranger, a success that has made it one of the most popular and important texts of the twentieth century. She quickly glosses over the critical reaction to The Stranger by pointing out that readings of the novel map some of the most important theoretical lenses that have influenced twentieth-century thought. “In fact, you can construct a pretty accurate history of twentieth-century literary criticism by following the successive waves of analysis of The Stranger: existentialism, new criticism, deconstruction, feminism, postcolonial studies” (2). The Stranger, she claims, has influenced the thinking of a diverse population that spans generations. Indeed, the novel’s remarkable ability to remain relevant, perhaps even more relevant now than when it was published, is a feat that its author and its critics at the time could not have foreseen. I am not sure that students continue to read The Stranger with the commitment that they once did, but it is undeniable that the novel still matters, that it still provokes us into thinking, especially in a time when fundamentalism and terrorism are on the rise, and Europe and the United States are flirting with a new form of fascism in the guise of a renewed interest in rigid nationalism. But Kaplan is not necessarily interested in the public and academic reception of The Stranger. Instead, she claims that readers and commentators have overlooked something essential since the novel’s publication: a biography of the novel itself. “Yet something essential is lacking in our understanding of the author and the book. By concentrating on themes and theories—esthetic, moral, political—critics have taken the very existence of The Stranger for granted” (2-3). She takes the unprecedented, and academically unpopular, path that looks into the life of the author and the circumstances that allowed the author at a particular place and time to write one of the most powerful works of world literature. However, it is important to point out that Kaplan sets out to write a biography of the novel, and not the author. In fact, Camus’ life becomes a part of the puzzle that is The Stranger.

Kaplan is not the first to comment on the unlikely success of The Stranger and its problematic birth. She is, however, the first to devote an entire book to an investigation, almost documentary-like in its approach, of the novel from conception to publication and beyond. And she accomplishes this brilliantly. Told in twenty-six short chapters, bookended by a prologue and an epilogue, Kaplan’s book leads us into the depths of the novel in a highly engaging and thought-provoking fashion. In fact, the structure of her book presents its readers with the “life” of the novel, a life that has continued on long after the death of its creator. Drawing from a reservoir of sources, including Camus’ notebooks and her own trips to Algeria, Looking for the Stranger is a scholarly adventure story. As Kaplan writes in her acknowledgements: “I looked for The Stranger in libraries, in archives, in neighborhoods on three continents” (219). Of course, the idea of The Stranger was with her all of the time, but what makes Kaplan’s book so provocative is precisely the lengths she goes to in search of the novel. Kaplan explores The Stranger in three parts: before its publication, during its publication, and after its publication.

In the first chapter Kaplan gives us the image of a young man in front of a bonfire, burning various papers that link him to a past, a past that could be dangerous to him and those who know him. But as Kaplan tells it, the young Camus could not bring himself to burn all of his letters and writings. What he saved would act as a cache of material, both physical and remembered, which he would later extract and rework into a slim, simply told tale of a man who fails to cry at his mother’s funeral and, by a series of circumstances, ends up shooting an unnamed Arab on a beach, only to be arrested, tried, convicted, and sentenced to death. Yet the reader is never quite sure if the protagonist is convicted and sentenced to death because of the murder or because of his refusal to conform to the rules of a society that demands that one cry at one’s mother’s funeral. The image of the bonfire given to us by Kaplan is a powerful one. As we travel with her deeper into her investigation, we learn that the bonfire was a kind of rite Camus needed to perform in order to purge his mind and soul so that he could go on to write what he felt needed to be written—unimpeded by ghosts, but still attentive to their silences, which spoke to and through him.

Throughout the spring of 1940, six years after the bonfire, Camus worked furiously on The Stranger, almost in total isolation, holed up in his miserable hotel room in Montmartre and interrupted only to work for five hours a day at Paris-Soir. The twenty-six-year-old was as cut off from the world as he had ever been. Alone in a foreign city, with German bombs exploding all over France, Camus fought his loneliness and misery by throwing himself into his writing. Because he was not yet divorced from his first wife, Simone Hié, his fiancée Francine Faure refused to accompany him to Paris. The only things he brought with him were the first chapter of The Stranger and a few of his press clippings. Kaplan: “His sense of separation from everyone he loved put him in a state of mind that was both painful and enabling” (71). Like Camus’ biographer Olivier Todd, Kaplan highlights the importance of Camus’ isolation when he first arrived in Paris. Camus believed that the failure of A Happy Death, his abandoned first novel, was due to his inability to write without interruption. Camus’ isolation in Paris enabled him, out of necessity, to devote all of his attention to The Stranger. Kaplan’s research offers us a marvelous glimpse into the creative process Camus used, or perhaps more accurately, was host to, during his writing of the novel. Kaplan claims that Camus wrote The Stranger almost line for line, as if he were dictating a story he was seeing play out before his eyes. Where he struggled with the writing of A Happy Death, The Stranger seems to have emerged almost fully formed, complete.

The relative ease of its composition does not mean, however, that The Stranger was without its problems. In fact, the birth of The Stranger was long and fraught with difficulties both internal and external. Until his arrival in Paris, Camus struggled with getting into the narrative, creating a new story while also using material from A Happy Death. Interestingly, most reviewers of Kaplan’s book (Robert Zaretsky, himself an accomplished Camus scholar, and John Williams in particular[v]) have devoted a majority of their reviews to the shortage of paper in France as the novel was set to go to press. “To say that the very existence of The Stranger was threatened by the material conditions of the war is no exaggeration, since paper supplies were becoming more and more precious. It looked at one point as if Camus would have to supply his own paper stock!” (136). Camus was in Oran with his family at the time, and was happy to help Gallimard with locating paper. The novel came very close to not being published, but paper stock was found at the last minute and Camus was not obliged to supply his own.

Once the novel was published it was met with immediate success. But perhaps its success was not so unusual after all. From the beginning Camus wanted the French publishing world, located in Paris, to represent him. In the chapter “A Jealous Teacher and a Generous Comrade,” Kaplan tells the story of Camus’ almost frantic correspondence with Jean Grenier and Pascal Pia, the teacher and the comrade, respectively, and their influence on The Stranger in its early stages. More importantly, if Camus were to move from being a provincial author to reaching a wider audience, one that would include the whole of Europe and possibly America, he would have to seek publication outside of Algeria. As Kaplan notes: “Yet Paris was still the center of book publishing in France, and if Camus wanted to publish outside Algeria, he’d eventually have to find a way to get his manuscript to the capital” (107). This, it seems to me, provides the necessary evidence that Camus was thinking bigger than his native land. He desired a world stage, a stage that would allow his work to be read by the widest possible public, and Gallimard was the publisher that could provide him with that opportunity. In his book The Existentialist Moment: The Rise of Sartre as a Public Intellectual, Patrick Baert illustrates the importance of publishing houses, especially those in Paris, in providing the necessary outlet for ideas. “Intellectual ideas spread mainly through publications. Whether through books, magazines, or articles, publishing is central to the rise of intellectual movements. For such movements to be successful, authors have to be well connected to the main publishers and need to have sufficient freedom and power to be able to write what they want to write.”[vi] The network Gallimard could provide would plug Camus into some of the most resonant writers and thinkers of the time. As mentioned above, The Stranger was not just a novel, but also an important piece of a longer meditation on the absurd. Therefore, Camus’ relationship with Gallimard, as Kaplan points out, is a key component of his rise to international prominence. Quite frankly, without Gallimard, The Stranger might not have met with its tremendous success.

Camus’ association with Gallimard was not the only key to his success, however. Gallimard’s star and existentialism’s major voice, Jean-Paul Sartre, also had a lot to do with the success of The Stranger. In his celebrated review of The Stranger, originally published in 1943, Sartre almost single-handedly anoints Camus, ushering him into the French intellectual network and solidifying his reputation as a resonant French intellectual. Still, early on in his review Sartre points out that, like its author, The Stranger is a book from “across the sea,” highlighting Camus’ Algerian heritage. Sartre’s generous and insightful review gives a certain intellectual legitimacy to the novel. Sartre: “The Stranger is a classical work, a work of order, written about the absurd and against the absurd.”[vii] This Apollonian form of the novel, in the Nietzschean sense, further reinforces the boundary lines that mark the absurd context, a context that we might fold into the Dionysian, again in the Nietzschean sense.

But it would be a mistake to consider The Stranger a French novel; it is, in almost every sense, an Algerian novel, a novel obsessed with the sun and the sea. What is perhaps closer to the novel’s intention is, at least in part, a Mediterranean world in a colonial context. In other words, it is the world of the pieds-noirs, who enjoy French citizenship and the protection it offers, as opposed to that of Arab subjects. The treatment of Arab subjectivity is one of the chief criticisms postcolonial scholars hurl at The Stranger and its author. Yet a purely postcolonial reading of The Stranger severely limits our understanding of the novel. As David Carroll points out, “I would even say that to judge and indict Camus [as Edward Said does] for his “colonialist ideology” is not to read him; it is not to treat his literary texts in terms of the specific questions they actually raise, the contradictions they confront, and the uncertainties and dilemmas they express. It is not to read them in terms of their narrative strategies and complexity. It is to bring everything back to the same political point and ignore or underplay everything that might complicate or refute such a judgment.”[viii] The postcolonial lens that has dominated readings of The Stranger has also relegated it and its creator to a graveyard for Eurocentric authors. Kaplan’s attention to detail, however, locates the nameless murdered Arab in The Stranger in a central, one might even say privileged, position. Almost from the beginning, Kaplan admits to being nearly obsessed with the figure of the nameless Arab. Indeed, the namelessness of this character is one of the pivotal points in her book. As Kaplan discovers, there was a nameless Arab in Camus’ life, one that would lead him straight to the central scene in The Stranger.

In 2015 Other Press published the English translation of Kamel Daoud’s The Meursault Investigation, a retelling of The Stranger from the point of view of the brother of the Arab killed on the beach by Meursault. Daoud, an Algerian journalist living in Oran, writes for Le Quotidien d’Oran, a French-language newspaper in Algeria. The Meursault Investigation is an interesting book that reads more in the style of Camus’ The Fall than The Stranger. The protagonist, speaking to us in the first person from a bar in Oran, informs us that there are other facts in the case that we did not hear, chief among these the name of his brother, Meursault’s victim, Musa: “Who was Musa? He was my brother. That’s what I’m getting at. I want to tell you the story Musa was never able to tell. When you opened the door of this bar, you opened a grave, my young friend” (4). Daoud’s text comes dangerously close to being fan fiction. However, there is something profoundly relevant in the novel. The Meursault Investigation demonstrates a deeper understanding of The Stranger, and of Camus’ style. In writing this book, Daoud proves that he knows The Stranger intimately, and his contribution to the story is, indeed, worthy of consideration. The Meursault Investigation demands to be read, digested, and then read again in the context of the cultural as well as the literary conditions of Algeria before, during, and after its independence.

Kaplan devotes nearly an entire chapter (chapter 26) to Daoud’s novel and the figure of the unnamed Arab who appears in nearly spectral form in The Stranger. She tells us that she met Daoud in Oran in 2014, where he claimed “we don’t read The Stranger the same way as Americans, French, Algerians” (210). Kaplan’s reading of Daoud’s novel is a revelatory experience for her, and by association, for us. She strategically situates The Meursault Investigation both within and beyond the lens of postcolonial theory.

    Kaplan’s research into the source of the killing of the Arab scene in The Stranger is a remarkable piece of journalism. Her investigation led her through the towns and alleyways of Oran, to dusty archives, and populated streets, all despite an Algerian travel advisory for those holding a United States passport. “For two years, I had traveled to places in France and Algeria connected to The Stranger: I had walked down the former rue de Lyon in Algiers, past Camus’s childhood home. With photographer Kays Djilali, I climbed the steep Chemin Sidi Brahim, knocking on doors until we found the House Above the World, now the home of three generations of Kabyle women who speak neither French nor Arabic. With Father Guillaume Michel from Glycines Study Center in Algiers, I drove out to gold and blue vistas of Tipasa. In Paris, I stood in the dreary spot on the hill of Montmartre where Camus wrote in solitude” (211). At the end of the trail is a name: Kaddour Touil, and a story.

    Kaplan’s research demonstrates that it is not really Camus the author who haunts The Stranger, but rather it is the specter of Meursault who haunts Camus, both in life and after death. Meursault, as Olivier Todd informs us, is a combination of several people Camus knew. “The character of Meursault was inspired by Camus, Pascal Pia, Pierre Galindo, the Bensoussan brothers, Sauveur Galliero, and Yvonne herself. Marie was not Francine. Camus the writer mastered his novel in a way that Camus the man did not control in his life. Meursault never asked himself any questions, whereas Camus was always examining his actions and motivations.”[ix] Authors routinely use what and who they know for characters and their actions in books, but Camus’ relationship with Meursault seems to be as complicated as that character’s relationship with the reader. Kaplan’s book sheds a new light on the complexities of those relationships.

The Stranger is truly a work of world literature, in the sense that David Damrosch defines the concept.[x] With The Stranger we have an Algerian author who wrote in French but was influenced by Danish, Russian, and German thinking, and was stylistically influenced by American authors like Hemingway and James M. Cain. Alice Kaplan gives us a view of The Stranger that joins a growing chorus of scholarship on the controversial book and its author. She provides keen insight that opens up other avenues of thinking about that book and its author. Camus’ influence seems to be growing, not diminishing, as we move deeper into the twenty-first century, and this is needed, especially given the growing resurgence of nationalism and isolationist policies, e.g., Brexit and Trump. Perhaps it’s only literature, and international fiction in particular, that can save us from ourselves. In this age of social media epitomized by the egotistical selfie, international fiction has become more important than ever. Kaplan’s book reminds us that nothing exists in a vacuum, that great works of art come about contextually and pan-culturally. The Stranger might never have been a success without the French existentialist network of the time.

    Andrew Martino is Professor of English at Southern New Hampshire University where he also directs the University Honors Program. He has published on contemporary literature and is currently finishing a manuscript on the concept of security in the work of Paul Bowles.

    Notes

    [i] Edward W. Said. Culture and Imperialism. (New York: Vintage Books, 1994), 174.

[ii] Hannah Arendt. “French Existentialism.” Essays in Understanding: 1930-1954. (New York: Schocken Books, 1994), 192.

[iii] Albert Camus. The Myth of Sisyphus. Trans. Justin O’Brien. (New York: Vintage Books, 1991), 30.

    [iv] Randall Collins. The Sociology of Philosophies: A Global Theory of Intellectual Change. (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2002), 764.

    [v] See Zaretsky’s review in Los Angeles Review of Books (https://lareviewofbooks.org/article/biography-zaretsky-kaplan-camus/) and Williams’ review in the New York Times (Sept. 15, 2016).

    [vi] Patrick Baert. The Existentialist Moment: The Rise of Sartre as a Public Intellectual. (Cambridge, England: Polity Press, 2015), 138-139.

    [vii] Jean-Paul Sartre. “The Stranger Explained.” We Have Only This Life to Live: The Selected Essays of Jean-Paul Sartre 1939-1975. Ed. Ronald Aronson and Adrian Van Den Hoven. (New York: New York Review Books, 2013), 43.

    [viii] David Carroll. Albert Camus the Algerian: Colonialism, Terrorism, Justice. (New York: Columbia University Press, 2007), 15.

[ix] Olivier Todd. Albert Camus: A Life. (New York: Alfred A. Knopf, 1997), 107.

[x] Here I am thinking specifically of Damrosch’s theory of circulation. See David Damrosch’s What Is World Literature? (Princeton: Princeton University Press, 2003) for a full definition of the concept.

  • Daniel Greene – Digital Dark Matters

    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

Start-ups, passports, no-sail lists, databases: These may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but as a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and how technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this “fact” is often invisible even in our studies of surveillance and digital media more broadly speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development this whole time. Though this dark matter is difficult to visualize, Browne shows us how to trace it through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

    This then is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies—particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding their tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after being expelled from Algeria by French colonial authorities, a critical exploration of policing and surveillance. Fanon’s psychiatric imagination, granting such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of ‘control by quantification’—punch clocks, time sheets, phone taps, and CCTV—in factories and department stores, and the alienation engendered in the surveilled.

Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its counterpart is dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism and an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production that Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out that Bentham travelled on a ship transporting slaves during the trip on which he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline is thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveals the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate but rather partnered with the brutal spectacle of sovereign power that was intended to educate enslaved people on the limits of their humanity. This correction to the Foucauldian narrative is sorely necessary in a field, and a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

    “Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration, 1788. Library of Congress Rare Book and Special Collections Division, Washington, D.C.

These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest, largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism where the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized, calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation, the means by which the value of black lives is calculated and how those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance—say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching”—it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

Here then is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism—though she does not call it that—shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a ‘pull’ of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of “push” here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner—a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning—caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South. An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand
    A Google Books scanner’s hand, caught working on W.E.B. Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres in the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators), a topic still perceived to be a niche concern, that stand on par with our digital stories of capitalism—much less digital stories of racial capitalism.

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.

    Back to the essay

  • Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    a review of Dagmawi Woubshet’s The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS (Baltimore: The Johns Hopkins University Press, 2015)

    by Travis Alexander

    ~

    Not long after someone dies in Ethiopia, the edir—friend, relative, or neighbor—takes to the streets to blow a horn and call out the deceased’s name. Thus begins the process of mourning. After this announcement, the edir pitches a tent in front of the bereaved’s home. Over the next three days, mourners congregate in the tent and grieve. By the seventh day, public grieving has largely subsided. Still more of mourning’s urgency has passed by the fortieth and eightieth days, and by the seventh year. Dagmawi Woubshet opens The Calendar of Loss with a lyrical description of this practice, according to which the temporality of the living attunes itself to the claim of the dead. It’s a fitting introduction, as The Calendar casts Woubshet himself as no less edir than scholar. His particular charge is the AIDS dead from the “early years” of the epidemic—1981 to 1996, when highly active antiretroviral treatment became widely available. It was in 1996 that AIDS, according to certain political constituencies, was rendered nonlethal; according to others, it was even cured.

    The ambition of The Calendar, though, exceeds mourning the AIDS dead in the form of either memoir or uncritical memorialization. To be sure, there exists a prolific tradition of just this kind of memoirish text, epitomized by writers like Sarah Schulman. Woubshet looks instead to efforts made by AIDS mourners to simultaneously grieve their dead, process the historical contingency of these deaths, and reckon with the probability that their own deaths were on the horizon. As such, these works are “steeped in a ‘poetics of compounding loss’” (3). This idiosyncratic form of mourning not only registers a novel structure of feeling, but, in “confound[ing] and travers[ing] the limits of mourning,” renders extant literary and cultural elegiac genres inadequate (3). Evincing his interdisciplinary sensibility, Woubshet trains his analysis on genres ranging across obituaries, funerals, graffiti art, photography, film, epistolaries, choreography, installations, and, of course, the poetic elegy itself. The resulting critical work is a dialogue at the intersection of trauma studies, psychoanalysis, queer theory, and African Diaspora studies.

    Woubshet organizes the book’s chapters according to the various ways that queer loss was reinserted into a public discourse that had attempted first to conceal it, and then to efface its embodied specificities. To take only one of his most powerful examples, Woubshet addresses how in its traditional form the obituary had functioned as a disciplinary genre of (hetero-) reproductive futurism. In its foregrounding of birth-family kinship networks, the obituary not only omitted mention of gay partners, but reified the futurism (those, especially children, who live on) that sublimates and mediates such reproductivism. Moreover, these pieces never mentioned AIDS, coyly alluding instead to a “long disease” the deceased had suffered, thereby interring the dead in one last closet. In response to the mainstream news outlets running these posthumously disciplinary remembrances, gay newspapers “arrogated to themselves the authority of the obituary,” emphasizing the cause of death and the queer networks left in the wake of the decedent’s passing, thus both constituting queer counterpublics and protecting the “rights of the queer dead from the normative rites of the living” (59, 61, 67, 84). Woubshet’s ability to demonstrate how works of mourning exhumed the queer body interdicted from the scene of public grief is equally salient in his poetic analysis, centering on figures like Melvin Dixon and Paul Monette and informed by poetry and elegy scholars ranging from Peter Sacks to Max Cavitch to Jonathan Culler. He hastens to remind us that the explicitly fatal homophobia of the 1980s and ’90s has simply been sanitized into the gay liberalism of the present. In its triumphalist projection of gay normalcy and citizenship, gay liberalism (akin to what Jasbir Puar calls homonationalism) demands the erasure of AIDS, of the embodied queer past. “[B]y looking for the dead now, therefore,” The Calendar of Loss “challenge[s] gay liberalism’s present undertaking” (23).

    As such, the reformulation of central mourning genres such as the obituary, Woubshet notes, wasn’t demanded simply by the novel epidemiological and biocultural poetics of AIDS itself. It also responded to the unique forms of silence and erasure under which queer loss was placed in the 1980s and ’90s by civil and governmental institutions alike. It is this “regulation of the ‘sphere of appearances’” (to borrow Judith Butler’s phrase) that the activist group ACT UP (AIDS Coalition to Unleash Power) addressed in its motto “Silence = Death” (16). Woubshet argues that the protocols of silence in this era “disprized” mourners of queer loss, “shroud[ing]” their grief “in silence, shame, and disgrace” (4). The texts and performances collected in Calendar refuse this status, and collectively insist that “mourning = survival.”

    In its recuperation of a form of grief that is indeterminate and inconsolable, The Calendar of Loss is also a referendum on the approach to loss and trauma offered by Freudian psychoanalysis, which sets forth a pat binary between normative grief (mourning) and pathological grief (melancholia). Where the mourner eventually replaces his lost object, the melancholic cannot, and languishes. Amid the exigencies of AIDS, however, this binary falls short insofar as it fails to apprehend the fact that for these mourners, death is not a “singular” event, but part of an ever-expanding series of deaths, including—most likely—the mourner’s own (5). The forms of melancholic grief in queer communities constituted by AIDS are certainly not “normal” according to Freud, but neither are they pathological, inasmuch as they “achieve cathexis in mourning itself and in its art and activism. However, […] as newly cathected objects, [these] cannot displace loss; on the contrary, they place loss center stage” (18). In worrying the normal/pathological binary, Woubshet delivers a theoretical instrument to those employing psychoanalysis, and a bracing intervention to a queer theory whose conceptualizations of trauma have unproblematically embraced this conspicuously unqueer binarism for too long.

    Drawing on work by Howard Thurman, Woubshet observes that this non-pathological melancholy finds clear historical expression in the genre of slave songs and black spirituals. In the spirituals as well as in black life generally, “[d]eath and dying are not just ‘unusual, untoward events’ or ‘inevitably end-of-lifespan events,’ but instead punctuate [it] routinely and proleptically” (19). This constant anticipation of loss is central to the conceptions of social death elaborated by scholars such as Orlando Patterson. Thus, the paradigm of black mourning (as in the slave songs) and of black life generally “accommodates” and illuminates early AIDS mourning, particularly in its “insistence that death is ever present, that death is somehow always impending, and that survivors can confront all this death in the face of shame and stigma in eloquent ways that also often imply a fierce political sensibility and a longing for justice” (5). This comparative work confirms The Calendar of Loss as the first monograph in the humanities at the intersection of queer theory and African Diaspora studies and allows it to spark a true theoretical commerce between those fields (26).

    Already in this book, in fact, interdisciplinarity has sensitized Woubshet to a liability of queer theory over and above its internalization of Freud’s pathologization of melancholy. I’m speaking here of queer theory’s characterization of the child derived heavily from Lee Edelman’s pathbreaking No Future: Queer Theory and the Death Drive (2004). In this latter account, the figure of the child is not only opposed to the queer subject, but is deployed—insofar as it represents the claims of futurity—to discipline and defer queer pleasure, which represents by contrast not only the present at the expense of the future, but also the very foreclosure of the future itself. In his final chapter, Woubshet details the Sudden Flowers collective, which provides the resources for Ethiopian orphans whose parents were lost to AIDS to create works of art and performances that help mediate their grief. Many of these orphans choose to write letters to their deceased parents in which they chronicle the stages and practices of their mourning, and the sensation of the absence, the lost object(s) they have not (yet) filled or replaced. These children “rely not on idealized figures of innocence and purity to characterize their own experiences, but instead on queer figures of abjection, disparagement, and fearlessness,” thereby “thwart[ing] the naturalized figure of the child as the very embodiment of futurity” (140). The experiences of these children, then, are a living rebuke to the cleanliness of queer theory’s characterization of the child. But Woubshet doesn’t simply gesture to the children of Sudden Flowers to append an asterisk to queer theory’s anti-natalism, to correctively bolster its critical acumen (though he certainly does accomplish this). While joining Edelman in the latter’s critique of hegemonic natalism, he breaks away in aiming to indicate what we might well call the white privilege of queer theory—the complacency of the latter’s archive, its evident disinterest in the particularities of life in the submerged Global South in favor of an aestheticized lumping-together of African people with AIDS under the signifier of unalterable tragedy.

    But more witheringly still, The Calendar of Loss reveals the extent to which queer theory becomes a vested defender, an unwitting academic strategist, in the process of universalizing whiteness. Drawing on Robin Bernstein’s Racial Innocence, Woubshet recounts how, unlike the image of the white child that gelled (under the auspices of nineteenth-century Romanticism) to figure innocence, purity, and futurity, the black child discursively produced simultaneously (most canonically in the pickaninny) evoked repulsion, abjection, and social death (142). “Emptied of innocence and futurity,” he speculates, “the black child […] cannot be a marker against which queerness can be negatively defined” (142). Hidden behind the tact of Woubshet’s account is the indictment that positions like Edelman’s not only prefer the white child for its compatibility with a given theoretical imperative, but perpetuate a universalization according to which the white child, unburdened by racial marking, becomes the child as such, which iterates in turn the social death (in its rhetorical concealment) of the black child. This revelation represents just one of the fruits of Woubshet’s inflection of queer theory by the itinerary of African Diaspora studies.

    While we might fairly critique Woubshet’s failure to address the role of NGOs (like those that care for Ethiopian orphans) as the “mendicant orders” (cf. Hardt and Negri) of the very same biopolitical governmentality that allowed AIDS to become a pandemic in the first place, this oversight seems the exception rather than the rule. The Calendar’s more concerning oversight is instead its unintentional reification of vitalist, optimistic, and citizenship-oriented rubrics of affect in its moments of “recuperation.” Consider, for example, Woubshet’s description of the children in the Sudden Flowers art collective who become “political figure[s], publicly taking on one of the most urgent issues of our time, [while simultaneously] departing from the norm” (144). These children are revealed in turn as “powerful agents, as subjects capable of reflection on and articulation of their experiences” (140). Here these children become deserving of praise insofar as they embrace an active, vigorous relationship with their circumstances. Elsewhere Woubshet will attribute the same valorizing characteristics to the gay American subject of his book too. AIDS mourners “across the Atlantic […] embodied AIDS openly and fearlessly” (5). Here “openly and fearlessly” carries the same sense of vigor and interactivity he attributed to the “powerful,” “agent[ial]” children of Ethiopia.

    Not only do these forms of affect coincide neatly with the behavioral strictures demanded by a late liberalism that exercises itself in intellectual and emotional economies, but they also threaten to undo the depathologization of melancholy executed above. That is to say, where Woubshet had previously claimed to find melancholy non-pathological insofar as it generates a new cathexis (attention to compounding loss), here he seems to smuggle in—through “articulation of […] experiences”—the kind of object-replacement or work-completion characteristic of normative mourning. Indeed, he says so himself in expressing his desire to show that nonnormative mourning “can be ‘productive rather than pathological, abundant rather than lacking, social rather than solipsistic, militant rather than reactionary’” (22). Here Woubshet no longer desires simply a neutral opposition to the pathological (that is, the nonnormative), but—in the term “productive”—casts his lot with a term derived from the cathectic economy of capital. In turn “social” evokes liberal citizenship and pluralism, while “militant” continues in the valorization of vigorous and positive affect suggested earlier by “powerful,” “agent[ial],” “open,” and “fearless.” Inasmuch as “militancy,” “articulation,” “social[ity],” and “productiv[ity]” address themselves to futurity, they reiterate the natalism that Woubshet, in agreement with Edelman, deemed unsalvageable.

    Indeed, Edelman himself is perhaps most helpful in diagnosing the forms of complicity I’ve attributed to Woubshet. In a 2006 piece, he cautions us against the trap of “affirm[ing] an angry, uncivil ‘politics of negativity’” (“The Antisocial Thesis in Queer Theory” 821). Insofar as such negativity is “affirmed,” it becomes “little more than Oedipal kitsch,” performing the sentimental and “fundamentalist […] attachment to ‘sense, mastery, and meaning,’” and thereby striking “the pose of negativity while evacuating its force” (822). True negativity, meanwhile, refuses what Adorno calls the “all subjugating identity principle” (Negative Dialectics 320). In his attempt to depathologize queer melancholy, Woubshet pays homage to negativity, spurning the identification between melancholy and pathology. But in framing that melancholy as “militant,” “productive,” “social,” “articulate,” “open,” “fearless,” and certainly “agent[ial],” his negativity is outed as an identity principle in drag. This complicity also lends support to Jasbir Puar’s recent critique of affect theory (“Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity”). For her, the latter, in attempting to conceptualize a register of energies and forces uncapturable by a form of governmentality dependent on the capitalization of intellectual and emotional labor, unwittingly finds itself attributing to affect a set of optimistic, buoyant characteristics that are themselves of a piece with the imperatives of productivity and ablement central to late capital in the first place. While Woubshet’s methodology has no stake in affect, the optimism inherent in his characterizations of melancholic grief and its creative expression—even his exclusionary attention to only those who have taken it upon themselves to create—instantiates the ideological double-bind of Puar’s affect theorists.

    Of course, a productivity that is cyclical and endlessly iterative would be recuperable where one that is teleological would not. And his investment in the trope of the calendar, which evokes a form of articulation that repeats—despite its “militan[cy]”—in stasis, suggests that this is the version of productivity Woubshet has in mind. So his flirtation with productivity is potentially aesthetic rather than ideological. Whatever the case may be, The Calendar of Loss remains a rich and urgently needed contribution. When the legacy of AIDS is being submerged, not only by the rhetoric of gay liberalism, but by a generation of queer theorists who have turned their attentions elsewhere, efforts like Woubshet’s to “speak again” its history and “reanimate lives that demand remembering” cannot go unnoticed (xi).


    _____

    Travis Alexander is a Mellon Graduate Fellow at The University of North Carolina, Chapel Hill. Though broadly interested in Post-45 literature and visual art, his specific interests cluster around portrayals of the HIV/AIDS epidemic in film, literature, television, and cultural theory between the 1980s and 1990s. Website: http://englishcomplit.unc.edu/people/travis-alexander.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor. Negative Dialectics. Trans. E.B. Ashton. New York: Continuum, 1994.
    • Edelman, Lee, with Robert L. Caserio, Judith Halberstam, José Esteban Muñoz, and Tim Dean. “The Antisocial Thesis in Queer Theory.” PMLA 121.3 (2006): 819-828.
    • Puar, Jasbir. “Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity.” Women & Performance: A Journal of Feminist Theory 19.2 (2009): 161-172.
    • Woubshet, Dagmawi. The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS. Baltimore: The Johns Hopkins University Press, 2015.
  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, and have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or, as capitalism prefers to call it, “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And at the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings of education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it take fewer than nine years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Aamir R. Mufti: Forget English! Orientalisms and World Literatures (Harvard University Press, 2016)

    Reviewed by Vassilis Lambropoulos

    This essay was peer-reviewed by the editorial board of b2o: an online journal

    Aamir Mufti’s Forget English! exposes the regulatory operations of presumably borderless world literature.  Second, it questions the cultural control of presumably egalitarian global English.  Next, it traces the Orientalist administration of presumably universal colonial knowledge.  Readers may agree with all this despite the repeated warnings that these three systems remain closely implicated not only in the objects of study but also in epistemological critique.  Mufti’s most radical proposition comes last:  The basis of the modern national and global cultural field is the institution of literature, that is, the disciplinary literary regimen that includes the askeses of composition, the exercises of pleasure, the practices of interpretation, and the technologies of education.  Mufti’s critique of critique itself as an aesthetic ethics ought to be disturbing.  In what follows, I will repurpose his project, reshuffling its case studies, to foreground its ultimate target, literary ideology, namely, the constitutive antinomies of the interpretive freedom, the self-imposed limits and controls of aesthetic understanding.  I will do that by narrating the institutional story of “literature” that underlies his anatomy of world literature.

    Mufti proposes that today, as a popular project of translation, circulation, criticism, and scholarship, “world literature” turns an opaque and unequal process of violent appropriation into a supposedly transparent and equal one of free communication.  Its inviting name occludes “the ways in which contemporary critical thinking unwittingly replicates logics of a longer provenance in the colonial and postcolonial eras” (248).  This is particularly evident in multicultural celebrations of the Global South.  Mufti warns against “the triumphalist ‘We are the World’ tone so clearly discernible in the self-staging of world literature in our times.  In many ways, the rubric ‘postcolonial literature’ as used in the Global North now serves as a means of domesticating those radical energies – and not just linguistic or cultural differences – [for example, the now defunct “Bandung” internationalism] into the space of (bourgeois) world literature as varieties of local practice – as Indian, African, or Middle Eastern literary practices, for instance” (92).  Instead of liberal appeals to “diversity” and its token-like selections, what is needed is “a concept of world literature (and practices of teaching it) that work to reveal the ways in which diversity itself (national, religious, civilizational, continental) is a colonial and Orientalist problematic, one that emerges precisely on the plane of equivalence that is literature” (250).  Sensitivity to diversity and respect for difference may express noble sentiments but do nothing to question the values dominating the literary and academic market.

    Studies by scholars of world literature often “are salutary in having emphasized inequality as the primary structural principle of world literary space rather than difference, which has been the dominant preoccupation in the discussion of world literature since the late eighteenth century, including in Goethe’s late-in-life elaboration of the idea of Weltliteratur. But they give us no account whatsoever of the exact nature of these forms of inequality and the sociocultural logics through which they have historically been instituted, logics of the institution of inequality that incorporate notions and practices of ‘difference’ and proceed precisely through them” (33). Whether they are describing a “world system” or a “republic of letters,” these scholars fail “to understand the mutual imbrication of inequality and difference” (33) in their operations, which is as shortsighted as studying autopoiesis in Niklas Luhmann but not Cornelius Castoriadis. Mufti does not elaborate a new model of doing world literature. Instead, he examines how this comprehensive approach to culture has been devised and institutionalized for some two hundred fifty years, starting with the observation that its current resurgence is “a post-1989 development, which has appeared against the background of the larger neoliberal attempt to monopolize all possibilities of the international into the global life of capital. This mode of appearance of the literatures of the Global South in the literary sphere of the North is thus linked to the disappearance of those varieties of internationalism that had sought in various ways to bypass the circuits of interaction, transmission, and exchange of the emergent global bourgeois order in the postwar and early postcolonial decades in the interest of the decolonizing societies of the South” (91). Mufti seeks “to unmask and to make available for criticism and analysis” (20) world literature in the twenty-first century as the main “field force” (199) of the project to subsume all centrifugal possibilities for an international literature under the monopoly of global cultural capital. He treats it simultaneously as a “concept,” a “field of study,” and a set of “practices and institutional frameworks” (10), and uses a genealogical approach for a “critical-historical examination of a certain constellation of ideas and practices in its accretions and transformations over time” (19-20). In what follows I discuss the numerous and wonderful cases much less, in order to focus on the larger historical trajectory produced by this approach.

    The genealogy of world literature begins with the role that “literature as national institution” (3) played “in the emergence of the hierarchies that structure relations between societies in the modern world” (97).  An international literary space first formed in Europe as a structure of rivalries among the traditions (58) emerging in the “intra-European ‘competitive’ vernacularization,” which was later followed by its “colonial absorption and transformation” (76).  The standardization of the vernaculars was a central part of “a project of ethnonational or civilizational nationalism in linguistically diverse and multicultural societies” (148).  This made possible the formation of “literature” as a separate domain of writing and reading out of diverse guild, church, local, and other traditions.  “The nationalization of languages over the past two centuries all over the world . . . transformed former extensive and dispersed cultures of writing . . . into narrowly conceived ethnonational spheres” (146).  Through an extensive philological and interpretive operation “often-overlapping bodies of writing came to acquire, through a process of historicization, distinct personalities as ‘literature’ along national lines” (97).  This is how literature achieves centrality in all constellations of national arts.  “The (now universal) category of literature itself . . . marks this process of assimilation of diverse cultures of writing” (80).  New practices of reading claim existing textual regimes for new purposes and milieus while new elites are also trained to curate them.  “In this process of the acquisition of literary history, the textual corpus acquires, first of all, the attributes of literariness.  That is to say, . . . it enters the world literary system as one among many other literatures, being subject henceforth to the requirements and measures of literariness, replacing the models and modes of evaluation internal to the textual corpus itself.  Furthermore, in the moment of its historicization, it undergoes a shift of orientation within the larger social formation, being reinscribed within a discursive system for the attribution of a literature to a language, understood as the unique possession and mode of expression of a people” (141).

    A foundational act of historicization produced for the first time the terms of a distinct and independent literary history, anchoring a regional tradition in a national logic (143).  When a premodern corpus of undifferentiated writing acquired such a prestigious history, its newly self-regulating “works” entered literary modernity (38-9).   The admission of a corpus “into world literary space as a distinct literary tradition has characteristically taken place since the nineteenth century through its acquisition of a narrative of (‘national’) historical development” (131).  A literary history proper legitimized the literary modernity of a writing tradition by granting it national authority.

    Thus the word “literature” in the term world literature “marks the plane of equivalence and compatibility between historically distinct and particular practices of writing” (240).  The word “world” in “world literature” is a world of nations, the new regimes of sovereignty.  “’World’ and ‘nation’ are in a determinate relationship of mutual reinforcement here, rather than simply one of contradiction or negation” (77).  When world literature is invoked, it is important to keep in mind “the forms of nationalization of language, literature, and culture installed . . . precisely in and through the world-historical process that is the emergence of world literature” (130).  Literature and nation are mutually authenticating and reinforcing:  They confirm the antiquity and autonomy of one another. “The concept and practices of world literature, far from representing the superseding of national forms of identification of language, literature, and culture, emerged for the first time precisely along the forms of . . . nation-thinking” (97).  In addition, world literature played an important role in the orientation of national literatures toward the global space to which every nation could make its own “distinct national contribution” (112).  This role ought to be placed in an even broader global context since it is important to stress that “the emergence and modes of functioning of world literature, as the space of interaction between and articulation of the ‘national’ or regional literatures, are elements of the much-wider historical process of the emergence of the modern, bourgeois state and its dissemination worldwide, under colonial and semicolonial conditions, as the normative state-form of the modern era” (98).  Literature strengthened the claim of the national state against other state forms by giving voice to its organic character.

    It is in this broader context that Mufti introduces world literature as “the (bourgeois) understanding and experience of the world as an assemblage of ‘literary’ or expressive traditions, whose very ground of possibility was the Orientalist knowledge revolution” (90).  Tracing “the historical dialectic of Orientalism and/as world literature” (38) within literary studies since the late eighteenth century (99), he highlights the production of entirely new objects of study and insists on the central role “that philological Orientalism played in producing and establishing a method and a system for classifying and evaluating diverse forms of textuality, now all processed and codified uniformly as literature” (80).  If national literature was from the beginning world literature too, this was based on Orientalist assumptions.  Mufti’s strong thesis is that “a genealogy of world literature . . . leads to the classical phase of modern Orientalism in the late eighteenth and early nineteenth centuries, an enormous assemblage of projects and practices that was the ground for the emergence of the concept of world literature as for the literary and scholarly practices it originally referenced” (19).  The project of philological Orientalism, from the microscopic level of the text to the macroscopic one of the library, produces an entire hermeneutics, which “may be understood as a set of processes for the reorganization of language, literature, and culture on a planetary scale that effected the assimilation of heterogeneous and dispersed bodies of writing onto the plane of equivalence and evaluability that is (world) literature, fundamentally transforming in the process their internal distribution and coherence, their modes of authorization, and their relationship to the larger social order and social imaginaries in their place of origin” (145).  In a nutshell, this is how the colonial Orient was collected, archived, studied, and administered, and the regimes of the truth of the empire established and imposed.

    Orientalism should be understood not only as the apparatus that produced the Orient as a domain of interpretation and administration but additionally as “the cultural system that for the first time articulated a concept of the world as an assemblage of ‘nations’ with distinct expressive traditions, above all ‘literary’ ones.  Orientalism thus played a crucial role in the emergence of the cultural logics of the modern bourgeois world, an element of European self-making, first of all” (35).  In this respect, as in others, the author acknowledges his predecessor, Edward Said, whose  “entire effort in Orientalism was (at one level) to argue for the centrality of Orientalism, as cultural logic and enterprise, to the emergence of modern European culture, to Europe’s self-making” (75).  Mufti illustrates his argument with a fascinating example, proposing that the “lyricization of poetry in the West,” that is, the “gradual expansion of . . . ‘lyric’ norms of expression . . .  to encompass” all practices of reading and writing poetry, is “an intercultural and worldwide process” that can be traced back to the “Orientalist ‘discovery’ of the ‘ancient’ poetic traditions of the ‘Eastern nations’” (71).   By considering the Orient/Occident interplay, a genealogy of the early concepts and practices of world literature shows how a “’lyric’ sensibility emerged in Europe at the threshold of modernity in the encounter with ‘Oriental’ verse and, having taken over the universe of poetic expression in the West, became a benchmark and a test for ‘Oriental’ writing traditions themselves, erasing in the process all memory of its intercultural origins” (74).

    Together, philological Orientalism and philosophical historicism (in Erich Auerbach’s contrast, Herder’s “Nordic” national variety rather than Vico’s “Latinate civilizatory” one) made the new concept of world literature possible.  The combined Orientalist and historicist thinking legitimized both the different manners of being human and “the same manner of being different” (77).  In addition to its contribution to European self-making, Orientalism contributed to world making as well and deserves to be studied “as an articulated and effective imperial system of cultural mapping, which produced for the first time a conception of the world as an assemblage of civilizational entities, each in possession of its own textual and/or expressive traditions” (20).  Orientalist mapping structured “the cultural logic of the modern, bourgeois West in its outward orientation” (11) and facilitated the expansionist “transformation of societies on a world scale” (90).  In non-Western societies it fabricated “forms of cultural authority tied to the claim to authenticity of (religious, cultural, and national) ‘tradition’” (27).

    Orientalism was first activated in the production, periodization, and territorialization of India.  “What the early generation of Orientalists encountered on the subcontinent was not one single culture of writing but rather a loose articulation of different, often overlapping but also mutually exclusive, systems based variously on Persian, Sanskrit, and a large number of the vernacular registers, often more than one in a single language, properly speaking” (104-5).  To make sense of this variety and complexity, they re-structured it completely on the basis of the only model they knew and trusted, the historicist narrative of an evolutionary national history.  “The German and eventually pan-European discourse of world literature is thus fundamentally indebted to and predicated on” (104) the British colonial project of Indological philology, launched near the end of the eighteenth century.  “It is in this manner, by providing the materials and the practices of a new cosmopolitanism (as well as indigenist or particularist) conception of the world as linguistic and cultural assemblage, that English began to supplant the neoclassical order on the continent in which above all others French and France had provided the norms for literary production” (109).  Non-Western textual traditions entered the literary space as “literature” through the revolution of the philological knowledge that included the “discovery” of classical languages in the East and the invention of their family tree (58).  Eastern writing practices were absorbed into “literature” when their ancient works were classicized, that is, established as the original tradition of a civilization and arranged as its core national canon.

    Mufti documents “that Orientalist theories of cultural difference are grounded in a notion of indigeneity as the condition of culture – a chronotope, properly speaking, of deep habitation in time – and that therefore nationalism is a fundamentally Orientalist cultural impulse” (37).  What he calls the “chronotope of the indigenous” (74) consists of “spatiotemporal figures of habitation” (74) deeply rooted in both place/territory and time/history (129).  Its territorially common ground validates “the authenticity of tradition” (112).  Consequently, the task of genealogical inquiry is “to give a historical account of the acquisition of literary history . . . by a vast, diffuse, and internally differentiated body of writing . . . a historical (and critical) account of the . . . ascription of historicality . . . structured around the chronotope of the indigenous” (143).  The Orientalist practice of indigenization standardized the pluralist logic of a pre-modern cultural space into a differentiated linguistic-literary field and ushered it into the colonial “world republic of letters.”

    The “dual process of indigenization” (116) of language, literature, and culture, which incorporates the intertwined strategies of historicism and Orientalism, consisted in classicizing (say, into Sanskrit) a civilization (say, the Indo-Persian one) and vernacularizing (say, into Urdu and Hindi) its cosmopolitanism (say, the subcontinental one).  Thus, through indigenization, Indian writing essentialized itself into a national literature in order to be admitted to the Orientalist canon of world literature and join the global system of different and unique cultures.  The overlapping colonial cultural projects of indigenization “in the name of return to the origin” (173) and vernacularization as recovery of “authenticity” (251) are inseparable from bourgeois modernization (119).  “It is thus in English as cultural system, broadly conceived – namely, in the new Indology and its wider reception in the Euro-American world – that the subcontinent was first conceived of in the modern era as a single cultural entity, a unique civilization with its roots on the Sanskritic and more particularly Vedic texts of the Aryans. . . .  The idea that India is a unique national civilization in possession of a ‘classical’ culture was first postulated on the terrain of literature, that is, in the very invention of the idea of Indian literature in the course of the philological revolution” (109).  The encounter between Oriental philology and Occidental literature produced a national literary model that inspired the Indian national sentiment and identity (115) and created the “institution of Indian literature” (37, 73).

    I have constructed here the chronological genealogy of world literature that drives Mufti’s argument, the linear story that is plotted in his book through complex discussions of practices, notions, and texts.  The “world” of world literature consists of indigenous cultures using vernaculars to sustain literature as their national institution.  Their heterogeneity is predicated on standardized difference, their cosmopolitanism is based on the nation-state, their unity guaranteed by unequal power relations, and they can all be traced to the Orientalist construction of the colonial archive, be it registry, collection, or museum.  Mufti puts into practice with great integrity and virtuosity his conviction that “the task of criticism today is at the very least the untangling and rearranging of the various elements presently congealed into seemingly distinct and autonomous objects of divergent literary histories.  The critical task of overcoming the colonial logics persistently at work in the formation of literary and linguistic identities today is thus indistinguishable from the task of pushing against the multiple identarian assumptions, colonial and Orientalist in nature, of Hindi and Urdu’s mutual and religiously marked distinctness and autonomy.  A post-colonial philology of this literary and linguistic complex can never adequately claim to be produced from a position uncontaminated by the language polemic that now constitutes it and can only proceed by working through its terms.  This secular-critical task, furthermore, corresponds not to the erection of some image of a heterogeneous past but to the elaboration of the contradictory contemporary situation of language and literature itself” (128-9).  Forgetting English is possible only in English.

    He advocates resistance both to the colonial gaze and to national authenticity, asking fellow scholars to “forget” (that is, learn to question by working with) not only English and the “world” in world literature but also the prefix in post-colonial.  “If, on the one hand, I urge world literature studies to take seriously the colonial origins of the very concept and practices they take as their object of study, on the other, I hope to question the more or less tacit nationalism of many contemporary attempts to champion the cultural products of the colonial and postcolonial world against the dominance of European and more broadly Western cultures and practices” (53).  This position exemplifies the notion of a contrapuntal criticism that takes into account intertwined perspectives and discourses. “No self-described attempt to ‘return’ to tradition, religious or secular, can sustain its claim to be autonomous of ‘the West’ as Other. . . . No attempt at self-definition and self-exploration can therefore bypass a historical critique of the West and its emergence into this particular position of dominance.  And, in this sense, the critique of the West and the logics of its imperial expansion from a postcolonial location is in fact a self-critique, since this location is at least partially a product of that historical process” (153-4).

    While both Orientalism and Occidentalism/Anglicism seek to capture a “one-world” reality, they are caught between the local and the cosmopolitan, the particular and the universal (3).  By consciously operating within these tensions without being at home in either of their poles, the exilic perspective introduced by Auerbach and later advocated by Said can avoid both cosmopolitan detachment and communal narcissism.  An “exilic rethinking of the philology of world literature” (41) would become the basis for a radicalized “philology as homeless practice” (200), for a “historically engaged and linguistically attuned” (241) secular criticism with a “missing homeland” (202).  Supporting neither transnational nor autochthonous social imaginaries, it can provide a dialectically alert account of concrete cultural circumstances “because it captures simultaneously the violent exclusions of the national frame, the material reality of its (physical as well as symbolic) borders, the dire need to overcome its destructive fixations, and its inescapability in the present moment” (194).

    In his conclusion, addressing the central case of the post-colonial subcontinent, Mufti supplements the exilic perspective with an additional one, also drawn from twentieth-century experience, which promises to offer intrinsic means of study by drawing explicitly on partition as condition and modality since the “politics of linguistic and literary indigenization is a distinct element in the larger historical process that culminated in the religio-political partition of India in 1947 and is thus at the same time an important element in the history of the worldwide institution of world literature” (38).  In a manner reminiscent of the ways in which post-Heideggerian thought puts metaphysics “under erasure,” Mufti puts the subcontinent under partition.  “In light of the historical analysis of the cultural logic of Orientalism-Anglicism operating in the long, fitful, and ongoing process of bourgeois modernization in the subcontinent that I have attempted here, the task of criticism with respect to the field of culture and society in the region is therefore to adopt partition as method, to enter this field and inhabit the processes of its bifurcation, partition not merely as event, result, or outcome but rather as the very modality of culture, a political logic that inheres in the core concepts and practices of the state” (200).  Not a closed part of the past or even its living memory, partition is “the very condition of possibility of nation-statehood and therefore the ever-renewed condition of national experience in the subcontinent” (201).  The political logic of partition is inherent in the normative majoritarianism of the modern nation-state, which by definition entails the minoritarization of certain groups and practices, a crisis of legitimacy leading to the partition of society (200-1).  “To argue for partition as method is, therefore, to argue for extracting submerged modes of thinking and feeling from the ongoing historical experience that is partition” (202).

    Furthermore, in the twenty-first century this condition operates far beyond the subcontinent.  Ours is a time of proliferating boundaries where the traditional institution of the border of the nation-state is undergoing internal and external challenges and transformations, with some of its functions “redistributed throughout social space” (7) and others globalized, turning it into a “universalized institution” (201).  What is the meaning of world literature in a world where borders are traversing urban, regional, national, and transnational environments and literature often functions as a generalized cartography?  With this question I will proceed to indicate just a few of the many fields of inquiry where this book deserves to be studied and activated.

    Mufti’s notion of “partition as method,” which enriches the problematic of books like Asia as Method: Toward Deimperialization (2010) by Kuan-Hsing Chen and Border as Method (2013) by Sandro Mezzadra and Brett Neilson, should be of obvious interest to Border Studies, an interdisciplinary field that since the 1980s has been examining geographical, political, economic, cultural, and other boundaries primarily in Asia, Africa, and Latin America and with an emphasis on matters of migration and gender.  The field started by looking at legal, political, and lexical definitions, but it has been expanding to consider how borderscapes are narrated, performed, and de-legitimized in the Global South.  An anatomy of world literature would complement current studies of the ways in which, in addition to lands, borderings distribute languages, communities, stories, signs, and jurisdictions.  The order of literature since its national and Oriental origins shows borders working as epistemological devices and markers of relations rather than lines and locations.

    An adjacent and even more interdisciplinary field is the study of territories and their flux in the integrated post-industrial world.  Influenced by the work of Deleuze & Guattari (with their interests from “minor literature” to plateaus to nomadology), it has radically shifted emphasis from the structure to the flow of capital and the dominant econo-semiotic system, a shift Mufti has likewise made with literature.  The “assemblage of enunciation” might fit well with his notion of the writing corpus, and the “plane of immanence” with his “plane of equivalence.”  Most importantly, the Deleuzian “rhythm” of difference and repetition would resonate with the contrapuntal circulation of literature in the post-colonial milieu.

    The sociology of culture would benefit greatly from attention to the emergence of the literary sphere and its citizenry, whose members often belong to the national intellectual aristocracy.  Given its interest in the ways in which Bourdieu’s habitus operates according to a logic of practice, it would examine the subfield of literature within the objects, norms, and practices of the cultural field.  Mufti’s work on production and appropriation, and above all domination through symbolic power, provides numerous examples of the kind of capital gained and interest served by disinterested taste as competence and distinction as performance.

    The quest for cultural capital and symbolic power has been driven by the counter-political ideology of the aesthetic state, a milieu and habitus where aesthetic practices constitute the highest form of politics.  Mufti contributes greatly to an understanding of this regime, including the institutions it establishes and cherishes.  The bourgeois subject, who is the citizen of that ideal state, responds to the functional differentiation of society in distinct borderlands with the democratization of art and the sacralization of high culture. Through the proper literary education, fiction and poetry train readers to achieve a Kantian freedom of aesthetic autonomy by giving the interpretive law to themselves above the constraints of any internal or external partition.

    The path from the sociology of culture to its ideology may lead next to its ethics, namely, art as a spiritual ascesis.  Mufti has discussed the political rationality of the humanities and the aesthetically administered university.  His rigorous genealogical approach may be supplemented by Ian Hunter’s interest in humanism and the pre-national state of the sixteenth and seventeenth centuries as well as the aesthetic discipline of literary cultivation that emerged with Romantic literature and philosophy.  The origins of the philological skills that mobilized Orientalism to create world literature may also lie in a combination of artistic pleasure as worldly ethical competence with literary criticism as a moral practice of the self, that is, in the aesthetico-ethical training of the self in interpretive (self-)problematization which first produced the reader of literature.

    In addition to chronicling the emergence of world literature, Aamir Mufti’s Forget English! reflects on “just about the most encompassing cultural concept of our times, the notion of the systematic totality of the expressive productions of nothing less than humanity in its entirety” (252).  Through a genealogy of literary comparison it raises the question of doing comparative humanities on a global level.  That is why it ought to have a broad scholarly and pedagogical impact.  This is not a book that scholars may simply read with profit and then add to their bibliography and syllabus.  It invites reflection on what it means to compare at a time of universal comparability, that is, when everything is comparable (and also appears contemporary) to everything else.  Rather than seeking to add unknown or neglected materials to our canons, it challenges us to reconfigure canon making itself as well as the way we put together panels, collective volumes, or institutes.  Ultimately, Mufti is proposing that, in addition to new critiques, World Humanities needs new ways of constituting the humanities as a common.

    Vassilis Lambropoulos is the C. P. Cavafy Professor of Modern Greek in the Departments of Classical Studies and Comparative Literature of the University of Michigan.  He is the author of Literature as National Institution (1988).

  • Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    by Elizabeth Losh, The College of William and Mary

    The Gamergate controversy of recent years has brought renewed public attention to issues around online misogyny, as feminist game developers, critics, scholars, and fans of independent video gaming have been targeted by intense campaigns of digital harassment that seem to threaten their fundamental rights to personal privacy, bodily safety, and sexual agency. Feminists under attack by users of the hashtag #GamerGate complain of being silenced, as they report being disciplined for imagined infractions of supposed sexual, social, journalistic, and ludic norms in computational culture with punishing messages of censure, ridicule, exclusion, and violence. As noted by the mainstream news media, extremely aggressive tactics have been deployed, including leaking women’s sensitive private information – such as unlisted addresses and social security numbers – to the web (a practice known as “doxxing”), placing false reports with law enforcement or emergency first responders (a practice known as “swatting”), and highly personalized stalking with rapid escalations of threats of graphic violence that are often sexualized as rape or racialized as lynching. Although it may be important for the eloquent first-person testimony of the terrorized women themselves to be given priority as speech acts that command attention in resisting prevailing misogyny, the women’s antagonists are often allowed to remain invisible. Furthermore, allies presuming to advocate for the feminist victims of Gamergate may not adequately honor the wishes for peace, privacy, and closure that those experiencing online violence may express (Quinn 2015). This essay attempts to examine the larger discursive context of Gamergate and why hardcore gamers who were fans of AAA videogames – often with military storylines and first-person shooter game mechanics – constructed a seemingly illogical and paranoid explanatory theory about so-called “social justice warriors” (Bokhari and Yiannopoulos 2015) or “SJWs” pursuing unfair advantage to sway the game industry.

    How do we understand how Gamergaters’ claims for noninterference and sovereignty in game worlds and online forums function alongside their claims for no-holds-barred investigations and public debates? Common rhetorical tactics deployed by Gamergaters include using rights-based language to further this specific variant of the men’s rights movement (Esmay 2014) and making appeals to the values of a supposedly rational public sphere (MSMPlan 2015). As these hardcore gaming fans deny the materiality, affect, embodiment, labor, and situatedness of new media, they also affirm positive notions about the exceptionalism of a realm defined – in Nicholas Negroponte’s terms – by bits rather than atoms. Gamergaters are particularly vehement in denying that “online violence” is a possibility with tweets such as “>violence >online pick one” and “will you please point me to the online killing fields where all the bodies from violence online are kept?” (Wernimont 2015). The Gamergate vision of digital culture is one of disembodied and immaterial interactions in which emotional harm is considered to be nonviolent.

    According to Gamergate accounts, the assumption that hardcore gamers representing masculine white privilege were under attack was also apparently buttressed by a number of online articles by game journalists suggesting that the species was endangered and soon to be extinct. Gamers were declared “over” (Alexander 2014), at their “end” (Golding 2014), or facing the “death” of their collective identity (Plunkett 2014). The arguments made for years by feminist game collectives for pursuing the large market share in lower-status “casual” games, often played by women, had finally seemed to create inroads for independent developers. At the same time Gamergaters described their defensive position as a response to what they often characterized as a feminist “incursion” or “invasion” of gaming, conceptualized as a substantive attack or threat to gamers. So-called “men’s rights” proponents – who may characterize themselves as “Men’s Human Rights Activists” – differentiated themselves from the distributed and heterogeneous population of gamers but also proclaimed that “the same people attacking Gamergate have been attacking us for years, using exactly the same tactics” (Esmay 2014). According to Breitbart columnist Yiannopoulos (2014a), “cultural warriors” arrived on the scene of gaming like “genocidal, psychopathic aliens in Independence Day;” these “social justice warriors” allegedly attempted to colonize a diverse community, but their “killjoy” advances were repelled and defenders declared them “not welcome in the gaming community.” According to this columnist, supposedly “politeness and persistence” had guaranteed victory in “the culture wars against guilt-mongerers, nannies, authoritarians and far-Left agitators.” While Sara Ahmed (2010) has explicitly called for self-identified “feminist killjoys” to disrupt the perpetuation of patriarchal false consciousness and the enforcement of positive affect in society, the perceived opponents of Gamergate are often cast as the aggressors despite what may be deep desires to participate in the gaming communities that exclude them.

    Decades before Gamergate, the Dutch historian and theorist of play Johan Huizinga (2014) described what he called the “magic circle” of the temporary world constituted by a game, which appears to function as an isolated “consecrated spot” within which “special rules obtain” for performances apart from everyday concerns (10). Gamergaters often use similar terminology to discuss how game spaces are intended to serve as a refuge from real-world behavioral constraints and the restrictions of social roles, as in the case of one Breitbart blogger seeking to exclude “angry feminists” and “unethical journalists” from interference with game play.

    Gamers, as dozens of readers have told me in the relatively short time I have been covering the controversy now called #GamerGate, play games to escape the frustrations and absurdities of everyday life. That’s why they object so strongly to having those frustrations injected into their online worlds. The war in the gaming industry isn’t about right versus left, or tolerance versus bigotry: it’s between those who leverage video games to fight proxy wars about other things, introducing unwanted and unwarranted tension and misery, and those who simply want to enjoy themselves. (Yiannopoulos 2014a)

    Gamergate advocates claim that video games are expected to be arenas where gamers can assert their sovereignty and self-determination in spaces that can’t be “leveraged” or annexed to “fight proxy wars” by non-gamer outsiders.

    According to Huizinga (2014), the arena of game play is characterized by the freedom of voluntary participation, disinterested behavior, and an opposition to serious conduct. Similar criteria are also often presented as premises for action in the rhetoric of Gamergate enthusiasts in their comments on various sites for public debate. For example, feminist game developers and critics may be accused of coercing and manipulating potential allies who are journalists through sexual liaisons, romantic promises, or appeals to social justice that invoke guilt and shame. Feminist opponents of Gamergaters are also characterized on sites such as Breitbart as “self-promoters” and “opportunists” and labeled as “egotistical” people who “beg for sympathy and cash” (Yiannopoulos 2014b). Thus, according to the logic of free choice, feminist “social justice-oriented art” in digital culture is aimed at “robbing players of agency and individualism” in every possible kind of engagement (Yiannopoulos 2014b).

    Personal freedom and a separation from material interests or a profit motive are often cited as ethical values shared by Gamergate, although many of its tactics are not at all solemn or high-minded. Active Gamergaters on the Escapist and 8chan emphasize their own diverse and distributed structure, and these anarchic swarms of participants take action “for the lulz,” much as members of Anonymous and 4chan have engaged in outing and calling out campaigns (Coleman 2013). Images of feminist gamers are altered with editing software, phrases like “online violence” are mocked, and fake identities are manufactured with puns and inside jokes. For example, in a crowd-funding effort to promote women in games who disavowed feminist “SJWs,” Gamergate forum members created an elaborate green-eyed and hoodie-wearing fictional persona intended to represent a pro-Gamergate libertarian “everywoman.” The avatar dubbed “Vivian James” wears the four-leafed clover of 4chan, “tough-loves video games,” and “loathes dishonesty and hypocrisy” (“The Birth of Vivian” 2015).

    While Gamergaters emphasize “personal responsibility” and “individual agency” (Yiannopoulos 2014b) as values, feminist critics tend to emphasize interdependence and states of being always-already subject to the coercions of others. In Huizinga’s (2014) terms, feminists inside the magic circle may be perceived as “spoil-sports” who must be “ejected” from the “community,” because they are attempting to break the magic world by failing to acknowledge its misogynistic conventions (11-12). As Anastasia Salter (2016) notes, in Huizinga’s analysis the spoil-sport is most visible in “boys’ games,” thereby establishing solidarity around youthful masculinity as the norm.

    By discussing misogyny in different venues for conversation among networked publics in game forums, blogs, or vlogging communities, and even within live multi-player gaming itself, feminists are cast as a disruptive presence.  In this logic, social justice warriors must be treated as aggressors and repulsed by Gamergaters from the magic circles of game worlds so that these spaces can be reclaimed, returned to their proper exceptional status, and kept secure from real-world incursions.

    Of course, the concept of “safe space” has been central to the history of the women’s liberation movement and its associated consciousness-raising efforts. After all, feminists have reasoned that safe space might be necessary to explore intimate issues about sexuality and reproductive health – which might even include techniques for gynecological self-examination championed by foundational texts like Our Bodies, Ourselves – and safe space would also be needed to share confidences about personal histories of rape, domestic violence, and other forms of gendered trauma. How safe space is constituted can be developed along a number of different axes. For example, as awareness about “microaggressions” – a term used to describe the automatic or unconscious utterance of subtle insults (Solorzano, Ceja, & Yosso 2000) – has proliferated, participants at feminist events may be asked to be mindful of their own assumptions, privileges, and power relations in social gatherings. The full sensorium of potential kinds of assault may also be invoked in defining safe spaces, so those speaking loudly or wearing scent may be prohibited from these activities to protect those intolerant, averse, or allergic to certain stimuli.

    Feminists themselves have been reevaluating the assumed need for safe space for a variety of reasons. While media outlets grappling with the concept of “trigger warnings” may characterize any special treatment of vulnerable individuals as coddling or “hiding from scary ideas” (Shulevitz 2015), feminists are often concerned about how the gestures of exclusion mandated by protective impulses enforce particular norms counter to the goal of empowerment. Some argue that “brave spaces” that encourage public acts of asserting identity or declaring solidarity may be more productive than private “safe spaces” (Fox and Fleischer 2004). Homogeneous safe spaces designed for the security of cisgendered whites may be criticized as excluding transgender people (Browne 2009) or people of color (Halberstam 2014). As Betty Sasaki (2002) observes, “safety” can become “the code word for the absence of conflict, a tacit and seductive invitation to collude with the unspoken ideological machinery of the institutional family” (47). And Anne Donadey (2009) points out the irony “that radical feminist pedagogy tends to replicate the assumptions of the bourgeois concept of the public sphere” (214).

    In addition to using the #Gamergate and #SJW (for “social justice warrior”) hashtags on social media platforms such as Twitter, Gamergate adherents frequently use #NotYourShield, which indicates that feminists shouldn’t be shielded from criticism merely because they might claim alliances with underrepresented groups, such as women or minorities, given the fact that members of these groups might not identify with feminism or feel exploited, disenfranchised, or excluded from hardcore gaming communities. #NotYourShield allies of Gamergate may embrace the quintessential hardcore gamer identity of AAA titles with military themes, or may indicate that they are content with conventionally feminized casual games played on mobile devices and don’t want to interfere with so-called “real” games. While Gamergaters may protect the borders of their own magic circles, they criticize those who claim feminist discourse operates in safe spaces devoid of challenges from opponents. Affixing the #NotYourShield piece of metadata to a message supports Gamergaters’ contentions that feminists use the victimization of women and people of color to shield themselves unfairly from rebuttals or tests of truth claims. In videos such as “#NotYourShield – We Are Gamers,” choruses of voices are carefully curated to emphasize “corruption” and “censorship” as features of feminism, and “transparency” and call-out culture as features of Gamergate.

    Although Huizinga’s (2014) magic circle may be more open to public spectatorship than the private sphere of feminist safe space, it is also a zone of exception that is marked off by “secrecy” and “disguise,” according to Homo Ludens (13). Even if the rules for the magic circle are assumed to be uncontested, and the space of play is accepted as apart from the everyday world, the exceptional territory of game play could be a space of less violence (if mockery of authoritarian rulers is tolerated in the case of the Bakhtinian carnivalesque) or more violence (if physical injuries from contact sports are permitted that would normally be prosecuted as assault). Nonetheless, according to Edward Castronova (2007), the membrane of the magic circle “can be considered a shield of sorts, protecting the fantasy world from the outside world. The inner world needs defining and protecting because it is necessary that everyone who goes there adhere to the different set of rules” (147).

    Feminist game critics have begun to question Huizinga’s (2014) concept of a zone of exceptionalism, particularly as the legal, economic, and social consequences of game play are manifested in a variety of “real world” contexts. For example, Mia Consalvo (2009) challenges Castronova’s belief that “fantasy worlds” are a separate domain: “even as he might wish for such spaces, such worlds must inevitably leave the hands of their creators and are then taken up (and altered, bent, modified, extended) by players or users—indicating that the inviolability of the game space is a fiction, as is the magic circle, as pertaining to digital games” (411). Within game spaces of conflict and collaboration, players may bring different agendas into the magic circle, and thus it might be more difficult than Huizinga (or Castronova) imagines to reach consensus about the common rules of play. For example, when a guild of players in World of Warcraft decided to hold a funeral in an area for player-versus-player combat, other participants justified attacking the solemn ceremony in a coordinated raid on the grounds of asserting existing play conventions (Losh 2009). Consalvo further claims that the static, formalist vision of bounded play grounded in structuralist theory and articulated by Huizinga and his disciples ignores the fact that context is constantly being evaluated by players. Instead of the magic circle, she posits that players “exist or understand ‘reality’ through recourse to various frames” (415).

    For women, queer and transgender persons, and people of color who identify as gamers, neither magic circle nor safe space often seem descriptive of the harsh settings of their game play experiences. As Lisa Nakamura (2012) observes, playing as a woman, a person of color, or a queer person requires extraordinary game skills and talent at a level of hyper-accomplishment because of the extremely rigorous “difficulty setting” of playing in an identity position other than straight white male. Unfortunately, to be an exceptional individual in an exceptional space is often punished rather than rewarded. Moreover, as a woman of color, Shonte Daniels (2014) has insisted that “gaming never was a safe space for women” because “their identity makes them vulnerable to threats or harassment.” However, she also speculates that Gamergate may prove to be “both a blessing and a curse,” given how much attention to online misogyny has been generated by the intensity and egregiousness of Gamergate behavior.

    Many date the Gamergate controversy from fall 2014 – when harassment of dozens of feminists in the videogame industry, including game developers Zoë Quinn and Brianna Wu and cultural critic Anita Sarkeesian, made headlines. However, online misogyny and gender-based aggression have had a long history in digital culture that goes back to bulletin boards, MOOs, and MUDs and the existence of virtual rape in early forms of cyberspace (Dibbell 1998). To coordinate the current campaign of harassment, IRC channels and online forums such as Reddit, 4chan, and 8chan were used by an anonymous and amorphous group that came to be represented by the Twitter hashtag #GamerGate after actor Adam Baldwin deployed a familiar suffix associated with prominent political cover-ups. According to the Wikipedia entry, Gamergate “has been described as a manifestation of a culture war over gaming culture diversification, artistic recognition and social criticism of video games, and the gamer social identity. Some of the people using the Gamergate hashtag allege collusion among feminists, progressives, journalists and social critics, which they believe is the cause of increasing social criticism in video game reviews” (“Gamergate Controversy” 2015).

    It is worth noting that Wikipedia’s handling of its own distributed labor practices defining Gamergate has had a contentious history that included a personal invitation to Gamergaters from Wikipedia founder Jimmy Wales to contribute to improving the Gamergate article (Wales 2014), a pointed rejection of financial contributions to Wikipedia from Gamergaters (“So I Decided to Email Jimbo” 2014), and a defense of banning Wikipedia editors perceived as biased against Gamergate (Beaudette 2015). Ironically, during this intense period of engagement with the “toxic” participants of Gamergate eventually dismissed by Wales, Wikipedia often deployed a rhetoric about volunteerism, disinterested conduct, and playing by a neutral set of rules that paralleled similar rhetorical appeals from Gamergaters.

    Attention to this recent controversy – about who is a gamer and what is a game – has already generated a literature of scholarly response that focuses, as this essay does, on Gamergate rhetoric itself. Shira Chess and Adrienne Shaw’s (2015) essay, “A Conspiracy of Fishes,” analyzes how a particular cultural moment in which “masculine gaming culture became aware of and began responding to feminist game scholars” produced conspiratorial discourses with a specific internal logic that shouldn’t be dismissed as nonsensical:

    It is less useful to consider the validity of a conspiracy in terms of actual persecution, and is more potent if we look at it in terms of a combination of perceived persecution and an examination of the anxieties that the conspiracy is articulating. From this perspective, we can look at gaming culture as a somewhat marginalized group: For years those who have participated in gaming culture have defended their interests in spite of claims by popular media and (some) academics blaming it for violence, racism, and sexism. A perceived threat opens a venue for those who feel their culture has been misunderstood—regardless of whether they are the oppressors or the ones being oppressed. It is easy to negate and mark the claims of this group as inconsequential, but it is more powerful to consider the cultural realities that underline those claims. (217)

    As Chess and Shaw point out, the gamer identity may function in the context of other kinds of intersectional identities, in which subjects for whom the personal is political can be imagined as oppressors in one context and as the oppressed in another.

    In addition to the primary strategy of constructing a persecution narrative aimed at a marginalized group, Gamergate is also concerned with the secondary strategy of mapping supposed networks of influence across publication venues, media genres, knowledge domains, political spheres, and economic sectors. Such Gamergate infographics seem to have begun with visualizations that were often reminiscent of Wanted posters, in which names and photographs of individual offenders were clustered in particular interest areas. For example, 4chan assembled a list of “SJW Game Journalists” that was republished on Reddit, which goes far beyond the initial allegations of impropriety about game reviewing at Kotaku to target writers at over a dozen other publications.

    As Gamergaters go down the “rabbit hole” of exploring possible connections and exposing hidden networks, they eventually claim political and educational institutions as agents in the conspiracy with a particular focus on DiGRA, the Digital Games Research Association, which was founded in 2003 and holds an international conference each year. One diagram shows the tentacles of DiGRA extending into online venues for gaming news and reviews, such as Kotaku, Gamasutra, and Polygon, as well as mainstream publications with a print tradition, such as The Guardian and TIME, and conference venues for many AAA games, such as the annual Game Developers Conference (GDC), which was founded in 1988 with a focus on fostering more creativity in the industry. Pictures of offender/participants in the network continued to be featured in this denser and more recursive form of network mapping, as though facial recognition would be a key literacy for Gamergaters.

    It is worth noting that many feminists would describe DiGRA as far from being a haven from misogyny, given existing biases in game studies that may privilege academics with ties to computer science, corporate start-ups, or other male-dominated fields. Members of the feminist game collective Ludica have described strong reactions of denial when they declared at DiGRA in 2007 that the “power elite of the game industry is a predominately white, and secondarily Asian, male-dominated corporate and creative elite that represents a select group of large, global publishing companies in conjunction with a handful of massive chain retail distributors” and thus constitutes a “hegemonic” power that “determines which technologies will be deployed, and which will not; which games will be made, and by which designers; which players are important to design for, and which play styles will be supported” (Fron et al. 2007). The rhetoric of the Ludica manifestos about how games and gamers were being defined too rigidly by an industry enamored of AAA titles often ran counter to the origin stories of organizations such as GDC and SIGGRAPH.

    The third key strategy of Gamergaters – in addition to fabricating the persecution narrative and the influence maps – is formulating threats of financial retaliation. If liberal members of the press and academic and professional associations in game studies and game development benefit from a supposed flow of money, social capital, and privileged access to career advancement, libertarian Gamergaters will thwart them with economic threats. This creates a paradoxical dynamic in which Gamergaters both assert an ethos of economic disinterest – because gaming is supposed to be a non-profit/non-wage activity that is separate from accumulation of capital in the real world – and seek to exercise their collective power to crowdfund sympathizers, and boycott, divest, and freeze assets of feminist allies and ally organizations. Advertisers are besieged with consumer complaints about the ethics of reporting in game publications, university employees are reported to administrators with accusations about frittering away public funds, and even donations to Wikipedia are withdrawn by indignant Gamergaters.

    Because feminists supposedly use financial interest as a lever, Gamergaters must also use financial interest as a way to assert the fairness, neutrality, and civility of a rational public sphere, which is tied to their fourth strategy, policing discourse. Regulating language to keep it flowing freely in a neoliberal marketplace of ideas, where the best notions will supposedly be the most valued, Gamergaters explicitly refuse to tolerate what they cast as hyperbolic and hysterical feminist “strawmanning” and “insulting.” In insisting that harassers are a statistically insignificant fraction of their movement – a counterfactual account of its power to terrorize targets and dominate channels of communication – Gamergaters deploy language reminiscent of Robert’s Rules of Order as commonly as more stereotypical forms of trolling.

    This does not mean that the campaigns of Gamergate to construct us-and-them narratives, to make explicit and to visualize connections in social networks, to block some financial transactions and facilitate others, and to regulate discourse with structures of rational dialogue, leveling effects, and tone policing are not misogynistic. They defend and enable doxxing, swatting, and stalking behaviors that undermine the very barriers between virtual reality and material existence that are central to their contradictory ideologies of exceptionalism and common jurisdiction.

    The need for nurturing diversity among game players and developers (Fron et al. 2007) has been a work in progress for the better part of a decade, but in the wake of Gamergate, hundreds of prominent signatories who asserted the “right to play games, criticize games and make games without getting harassed or threatened” published an “open letter to the gaming community” (IGDA 2014). The fact that this pointed defense of feminist gamers, critics, and designers also used rights-based language might be instructive for better understanding the discursive context of Gamergate as well.

    The Italian biopolitical philosopher Roberto Esposito (2010, 2011) has theorized that two conflicting modalities of “community” and “immunity” operate when members either accept or resist the obligations of the social contract. Looking at the rhetoric of Gamergaters about the magic circle and how they caricature the rhetoric of feminists about safe space, we see how these oppositions are underexamined, and we can ask why opportunities for reflection and reflexive thinking about intersectionality are being foreclosed.

    Works Cited

    • Ahmed, Sara. 2010. The Promise of Happiness. Durham: Duke University Press.
    • Alexander, Leigh. 2014. “‘Gamers’ Don’t Have to Be Your Audience. ‘Gamers’ Are Over.” Gamasutra, August 28. http://www.gamasutra.com/view/news/224400/Gamers_dont_have_to_be_your_audience_Gamers_are_over.php.
    • Bailey, Moya. 2015. “#transform(ing)DH Writing and Research: An Autoethnography of Digital Humanities and Feminist Ethics.” Digital Humanities Quarterly 9, no. 2.
    • Beaudette, Philippe. 2015. “Civility, Wikipedia, and the Conversation on Gamergate.” Wikimedia Blog. January 27. http://blog.wikimedia.org/2015/01/27/civility-wikipedia-Gamergate/.
    • Bokhari, Allum, and Milo Yiannopoulos. 2015. “Entertainment Industry Says ‘No More’ to Social Justice Warriors.” Breitbart. July 20. http://www.breitbart.com/big-hollywood/2015/07/20/enough-entire-entertainment-industry-says-no-more-to-social-justice-warriors/.
    • Browne, Kath. 2009. “Womyn’s Separatist Spaces: Rethinking Spaces of Difference and Exclusion.” Transactions of the Institute of British Geographers, New Series, 34 (4): 541–56.
    • Castronova, Edward. 2007. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
    • Chess, Shira, and Adrienne Shaw. 2015. “A Conspiracy of Fishes, Or, How We Learned to Stop Worrying About #Gamergate and Embrace Hegemonic Masculinity.” Journal of Broadcasting & Electronic Media 59, no. 1: 208–20.
    • Coleman, Beth. 2011. Hello Avatar: Rise of the Networked Generation. Cambridge, MA: MIT Press.
    • Coleman, E. Gabriella. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Brooklyn, NY: Verso.
    • Consalvo, Mia. 2009. “There Is No Magic Circle.” Games and Culture 4, no. 4: 408–17.
    • Daniels, Shonte. 2014. “Gaming Was Never a Safe Space for Women.” RH Reality Check. November 10. http://rhrealitycheck.org/article/2014/11/10/gaming-never-safe-space-women/.
    • Dibbell, Julian. 1998. “A Rape in Cyberspace.” http://www.juliandibbell.com/articles/a-rape-in-cyberspace/.
    • Donadey, Anne. 2009. “Negotiating Tensions: Teaching about Race in a Graduate Feminist Classroom.” In Feminist Pedagogy: Looking back to Move Forward, edited by Robbin Crabtree, David Alan Sapp, and Adela C. Licona, 209–29. Baltimore, MD: Johns Hopkins University Press.
    • Esmay, Dean. 2014. “Keeping up with #Gamergate.” A Voice for Men. October 16. https://lockerdome.com/7754206970916417.
    • Esposito, Roberto. 2010. Communitas: The Origin and Destiny of Community. Stanford, CA: Stanford University Press.
    • ———. 2011. Immunitas: The Protection and Negation of Life. Cambridge, UK, and Malden, MA: Polity.
    • Fox, D. L., and C. Fleischer. 2004. “Beginning Words: Toward ‘Brave Spaces’ in English Education.” English Education 37, no. 1: 3–4.
    • Fron, Janine, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. 2007. “The Hegemony of Play.” In Proceedings, DiGRA: Situated Play, Tokyo, September 24-27, 2007, 309–18. Tokyo, Japan. http://www.digra.org/dl/db/07312.31224.pdf.
    • “Gamergate Controversy.” 2015. Wikipedia, the Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Gamergate_controversy&oldid=682713753.
    • Golding, Dan. 2014. “The End of Gamers.” Dan Golding. August 28. http://dangolding.tumblr.com/post/95985875943/the-end-of-gamers.
    • Halberstam, Jack. 2014. “You Are Triggering Me! The Neo-Liberal Rhetoric of Harm, Danger and Trauma.” Bully Bloggers. July 5. https://bullybloggers.wordpress.com/2014/07/05/you-are-triggering-me-the-neo-liberal-rhetoric-of-harm-danger-and-trauma/.
    • Huizinga, Johan. 2014. Homo Ludens: A Study of the Play-Element in Culture. Mansfield Centre, CT: Martino Fine Books.
    • “IGDA Developer Satisfaction Survey Summary Report Available – International Game Developers Association (IGDA).” 2015. https://www.igda.org/news/179436/IGDA-Developer-Satisfaction-Survey-Summary-Report-Available.htm (accessed September 23, 2015).
    • Jacobs-Huey, Lanita. 2006. From the Kitchen to the Parlor: Language and Becoming in African American Women’s Hair Care. Oxford, UK, and New York, NY: Oxford University Press.
    • Koebler, Jason. 2015. “Dear Gamergate: Please Stop Stealing Our Shit.” Motherboard. http://motherboard.vice.com/read/dear-Gamergate-please-stop-stealing-our-shit (accessed September 24, 2015).
    • Levmore, Saul, and Martha Craven Nussbaum. 2010. The Offensive Internet: Speech, Privacy, and Reputation. Cambridge, MA: Harvard University Press.
    • Losh, Elizabeth. 2009. “Regulating Violence in Virtual Worlds: Theorizing Just War and Defining War Crimes in World of Warcraft.” Pacific Coast Philology 44, no. 2: 159–72.
    • MSMPlan. 2015. “The Flaws in Adrienne Shaw’s Paper on Gamergate and Conspiracy Theories.” Medium. March 18. https://medium.com/@MSMPlan/the-flaws-in-adrienne-shaw-s-paper-on-Gamergate-and-conspiracy-theories-7fc91df43bc.
    • Nakamura, Lisa. 2012. “Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital.” Ada: A Journal of Gender, New Media & Technology 1, no. 1. http://adanewmedia.org/2012/11/issue1-nakamura/
    • Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.
    • Plunkett, Luke. 2014. “We Might Be Witnessing The ‘Death of An Identity.’” Kotaku, August 28. http://kotaku.com/we-might-be-witnessing-the-death-of-an-identity-1628203079.
    • Quinn, Zoe. 2015. “August Never Ends.” Quinnspiracy Blog. January 11. http://ohdeargodbees.tumblr.com/post/107838639074/august-never-ends.
    • Salter, Anastasia. 2016. “Code before Content? Brogrammer Culture in Games and Electronic Literature.” Presented at the Electronic Literature Organization, University of Victoria, June 10.
    • Sargon of Akkad. 2014. A Conspiracy Within Gaming #Gamergate #NotYourShield. https://www.youtube.com/watch?v=yJyU7RSvs_s.
    • Sasaki, Betty. 2002. “Toward a Pedagogy of Coalition.” In Twenty-First-Century Feminist Classrooms: Pedagogies of Identity and Difference, edited by Amie A. Macdonald and Susan Sánchez-Casal, 31–57. New York, NY: Palgrave Macmillan.
    • Shield Project. 2014. #NotYourShield – We Are Gamers. https://www.youtube.com/watch?v=SYqBdCmDR0M#t=81.
    • Shulevitz, Judith. 2015. “In College and Hiding From Scary Ideas.” The New York Times, March 21. http://www.nytimes.com/2015/03/22/opinion/sunday/judith-shulevitz-hiding-from-scary-ideas.html.
    • “So I Decided to Email Jimbo…” 2015. Reddit. Accessed September 25. https://www.reddit.com/r/KotakuInAction/comments/2pphuo/so_i_decided_to_email_jimbo/cmyzva7?context=3.
    • Solorzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” The Journal of Negro Education 69, no. 1/2: 60–73.
    • “The Birth of Vivian.” 2015. http://i.imgur.com/FdqKFwu.jpg (accessed September 27, 2015).
    • Wales, Jimmy. 2014. “I Have an Idea for pro #Gamergate Folks of Good Will. Go to http://Gamergate.wikia.com/Proposed_Wikipedia_Entry … and Write What You Think Is an Appropriate Article.” Microblog. @jimmy_wales. November 12. https://twitter.com/jimmy_wales/status/532624325694992385?ref_src=twsrc%5Etfw.
    • Wernimont, Jacqueline. 2015. “A ‘Conversation’ about Violence against Women Online (with Images, Tweets) · Jwernimo.” Storify. https://storify.com/jwernimo/a-conversation-about-violence-against-women-online (accessed September 23, 2015).
    • Yiannopoulos, Milo. 2014a. “Gamergate: Angry Feminists, Unethical Journalists Are the Ones Not Welcome in the Gaming Community.” Breitbart. September 14. http://www.breitbart.com/big-hollywood/2014/09/15/the-Gamergate-movement-is-making-terrific-progress-don-t-stop-now/.
    • ———. 2014b. “The Authoritarian Left Was on Course to Win the Culture Wars… Then Along Came #Gamergate.” Breitbart. November 12. http://www.breitbart.com/london/2014/11/12/the-authoritarian-left-was-on-course-to-win-the-culture-wars-then-along-came-Gamergate/.
  • Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve to only heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And it is a trope that was alive and drawing from that well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

    Over the course of two days, some three dozen speakers and a similar number of panelists gathered before a rapt and appreciative audience to opine on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” – and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright, caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted, or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and one’s heart is pounding, it is worth being wary of the ideology in which one is being baptized.

    To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic, they are not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

    What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R,” it is worth pondering whether technology really is, does, or results in those things, or if what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term; it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying), but in throwing the term technology about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb, and the word technology – or “tech.” Yet “tech” at PDF almost always meant the Internet or a device linked to the Internet – and qualifying this by saying “almost” is perhaps overly generous.

    Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, other social networks, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much, a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all-encompassing, who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology,” on what grounds can one say: “this stuff is rubbish”? For even if it is clear that the term “technology” is being used in a way that focuses on the Internet, if one starts to seriously go after technology then one will inevitably be confronted with the question “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology; it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016, how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

    The problem is that when “technology” is used to mean only one set of things, it muddles the boundaries of what those things are, and what exists outside of them. And while it does this, it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us,” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an, often earnest, belief that they would change everything for the better! Claims that the Internet will save us invoke déjà vu for those familiar with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

    Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc…) – what PDF argues for is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass that one finds oneself in when “the tech” just means the Internet.

    Indeed, by meaning “the Internet” but saying “the tech,” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees to the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of, perhaps accidental, juxtaposition; however, to appreciate the joke one needed to open the booklet so as to be able to see the front and back cover simultaneously. Here is what that looked like:

    Personal Democracy Forum (2016)

    Get it?

    Hilarious.

    The cover says “The Tech We Need” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each of those covers independently of the other: when PDF says “we,” who is included and who is excluded? When Microsoft says “we,” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures us that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it… which might be very different from the kind of tech we need.

    In considering the “we” of PDF, it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, speaking for a “we” is always dangerous, and insofar as one can think of PDF with any kind of “we” (or “us”) in mind, the only version of the group that really emerges is one that leans heavily towards describing the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event, the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day, I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

    One encountered further issues with the “we” when it came to the political content of the forum. While the booklet stated, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum, even though it is not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

    Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” Nevertheless, one of the best ways to inoculate yourself against the charge of unthinking optimism is to periodically turn the microphone over to a critic. But perhaps the most important things that such critics say are the ways in which they wind up qualifying their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) being concerned that they will lose their spot at the top of the food chain. The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” is to solicit viewpoints that genuinely come from outside of that “we.” Granted, PDF is more TED talk than FRED talk.

    And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud, and passing phrases acknowledged mining – but PDF’s “we” seems mainly to refer to a “we” defined as those who use the Internet and Internet-connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but never fully included in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” from the classic IWW song “we have fed you all for a thousand years” (Green et al. 2016). And, as the second line of that song rings out, “and you hail us still unfed.”

    “Need”

    When one looks out upon the world, it is almost impossible not to be struck by how much is needed. People need homes, people need not just to be tolerated but to be accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need… this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course it is important to note that technology does have an important role in meeting people’s needs – after all, “shelter” refers to all sorts of technology. Yet, when PDF talks about “the tech we need,” the “need” is shaded by what is meant by “the tech,” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask: do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror, it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

    The question of “need” flows from both the notion of “the tech” and that of “we” – and as was previously mentioned, it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

    Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier reference to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

    Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen, but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and… the list goes on. Under the ideology captured in PDF’s “the tech we need,” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need that the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

    The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – but in framing things in terms of “the tech we need,” what is missed is the more difficult question of what “ends” we seek. Instead, “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

    The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means,” the ends begin to vanish. It ceases to be a situation where the Internet is the means to a particular end; instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the… it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

    In 1995, Richard Barbrook and Andy Cameron penned an influential article that described what they called “The Californian Ideology” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

    As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the viewpoint underlying the firms that were (at that time) nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if there was now somebody in the boardroom wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a disdainful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

    The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders, the forum continually returned to the questions of democracy. While the Personal Democracy Forum shares the “faith in the emancipatory potential of the new information technologies” with Silicon Valley startups, it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” whose hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian,” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital, the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

    Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF “The New York Ideology” is to indicate the way in which current discussions around the Internet are not solely being framed by “The Californian Ideology” but also by a parallel position wherein faith in Internet-enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (Silicon Valley) and the New York Ideology as the CV Ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy by making it seem that ethics and democracy are just more consumption choices that one picks from the list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

    I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology, the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • Bradley J. Fest – The Function of Videogame Criticism

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and the matter of the cultural status of the videogame has hardly been put to rest. As a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that Bogost begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like somewhat of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

    To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games cannot attain the heights reached by previous criticism simply because of some ill-defined aspect of the medium that prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion of Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), and the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—belie such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

    In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

    Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms and self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist,” and, indeed, no matter how disinterested, historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there may be more ripe currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

    In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not accord with present conceptions of academic activity (indeed, it might even be unfashionable!), which seem more generally to despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential ability to allow reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

    Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (interviews here and here), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

    [3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).


    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.


  • Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem

    Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem

    By Audrey Watters

    ~

    On July 14, Facebook released its latest “diversity report,” claiming that it has “shown progress” in hiring a more diverse staff. Roughly 90% of its US employees are white or Asian; 83% of those in technical positions at the company are men. (That’s about a 1% improvement from last year’s stats.) Black people still make up just 2% of the workforce at Facebook, and 1% of the technical staff. Those are the same percentages as 2015, when Facebook boasted that it had hired 7 Black people. “Progress.”

    In this year’s report, Facebook blamed the public education system for its inability to hire more people of color. I mean, whose fault could it be?! Surely not Facebook’s! To address its diversity problems, Facebook said it would give $15 million to Code.org in order to expand CS education, news that was dutifully reported by the ed-tech press without any skepticism about Facebook’s claims about its hiring practices or about the availability of diverse tech talent.

    The “pipeline” problem, writes Dare Obasanjo, is a “big lie.” “The reality is that tech companies shape the ethnic make up of their employees based on what schools & cities they choose to hire from and where they locate engineering offices.” There is diverse technical talent, ready to be hired; the tech sector, blinded by white, male privilege, does not recognize it, does not see it. See the hashtag #FBNoExcuses which features more smart POC in tech than work at Facebook and Twitter combined, I bet.

    Facebook’s decision to “blame schools” is pretty familiar schtick by now, I suppose, but it’s still fairly noteworthy coming from a company whose founder and CEO is increasingly active in ed-tech investing. More broadly, Silicon Valley continues to try to shape the future of education – mostly by defining that future as an “engineering” or “platform” problem and then selling schools and parents and students a product in return. As the tech industry utterly fails to address diversity within its own ranks, what can we expect from its vision for ed-tech?!

    My fear: ed-tech will ignore inequalities. Ed-tech will expand inequalities. Ed-tech will, as Edsurge demonstrated this week, simply co-opt the words of people of color in order to continue to sell its products to schools. (José Vilson has more to say about this particular appropriation in this week’s #educolor newsletter.)

    And/or: ed-tech will, as I argued this week in the keynote I delivered at the Digital Pedagogy Institute in PEI, confuse consumption with “innovation.” “Gotta catch ’em all” may be the perfect slogan for consumer capitalism; but it’s hardly a mantra I’m comfortable chanting to push for education transformation. You cannot buy your way to progress.

    All of the “Pokémon GO will revolutionize education” claims have made me incredibly angry, even though it’s a claim that’s made about every single new product that ed-tech’s early adopters find exciting (and clickbait-worthy). I realize there are many folks who seem to find a great deal of enjoyment in the mobile game. Hoorah. But there are some significant issues with the game’s security, privacy, its Terms of Service, its business model, and its crowd-sourced data model – a data model that reflects the demographics of those who played an early version of the game and one that means that there are far fewer “pokestops” in Black neighborhoods. All this matters for Pokémon GO; all this matters for ed-tech.

    Pokémon GO

    Pokémon GO is just the latest example of digital redlining, re-inscribing racist material policies and practices into new, digital spaces. So when ed-tech leaders suggest that we shouldn’t criticize Pokémon GO, I despair. I really do. Who is served by being silent!? Who is served by enforced enthusiasm? How does ed-tech, which has its own problems with diversity, serve to re-inscribe racist policies and practices because its loudest proponents have little interest in examining their own privileges, unless, as José points out, it gets them clicks?

    Sigh.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
