b2o: boundary 2 online

Category: Digital Studies

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

It is a book ostensibly targeted at students, though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives dubbing these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective, one that fails to adequately embed uses of digital technology in their larger contexts.

Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of content (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

screenshot of a Karl Marx Community Page on Facebook

While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smartphone, for example) and are creating surplus value as long as they are interacting with social media. Far from describing a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and that the enrichment of internet firms relies upon the exploitation of those various forms of labor—central amongst these being the social media user.

Fuchs considers five specific platforms in detail, not simply to illustrate the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization with Google, surveillance and privacy with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed, one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, fully going along with Fuchs’s critique requires readers to already be in agreement, or at least relatively sympathetic, with Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead some to dismiss his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and that which threatens the book’s potential political efficacy.

Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-or-tails choice between “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford), hard-edged skeptics who would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers had warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, of capitalism, and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

Two other areas that remain somewhat troubling are Fuchs’s treatment of Wiki platforms and his treatment of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims “Wikipedians are prototypical contemporary communists” (243), partially to suggest that many people are already engaged in commons-based online activities, and yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”); some have alleged, moreover, that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order—to say nothing of the role that editing “bots” play on the platform or the degree to which Wikipedia relies upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization is primarily engaged in a rearguard self-defense while the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if, to use it, one must rely on devices whose minerals were mined in warzones, which were assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful, then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is itself simply a new face on an old system. While Fuchs has constructed an argument that resolutely holds its position, it is a stance that one does not encounter often enough in debates around social media, and one that will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.

  • From the Decision to the Digital

    From the Decision to the Digital

    Laruelle: Against the Digital

    a review of Alexander R. Galloway, Laruelle: Against the Digital

    by Andrew Culp

    ~
    Alexander R. Galloway’s forthcoming Laruelle: Against the Digital is a welcome and original entry in the discussion of French theorist François Laruelle’s thought. The book is at once both pedagogical and creative: it succinctly summarizes important aspects of Laruelle’s substantial oeuvre by placing his thought within the more familiar terrain of popular philosophies of difference (most notably the work of Gilles Deleuze and Alain Badiou) and creatively extends Laruelle’s work through a series of fourteen axioms.

The book is a bridge between current Anglophone scholarship on Laruelle, which largely treats Laruelle’s non-standard philosophy through an extension of problematics common to contemporary continental philosophy (Mullarkey 2006, Mullarkey and Smith 2012, Smith 2013, Gangle 2013, Kolozova 2014), and such scholarship’s maturation, which blazes new territory because it takes thought to be “an exercise in perpetual innovation” (Brassier 2003, 25). As such, Laruelle: Against the Digital stands out from other scholarship in that it is not primarily a work of exposition or application of the axioms laid out by Laruelle. This approach is apparent from the beginning, where Galloway declares that he is not a foot soldier in Laruelle’s army and that he does not proceed by way of Laruelle’s “non-philosophical” method (a method so thoroughly abstract that Laruelle appears to be the inheritor of French rationalism, though in his terminology, philosophy should remain only as “raw material” to carry thinking beyond philosophy’s image of thought). The significance of Galloway’s Laruelle is that he instead produces his own axioms, which follow from non-philosophy but are of his own design, and takes aim at a different target: the digital.

    The Laruellian Kernel

    Are philosophers no better than creationists? Philosophers may claim to hate irrationalist leaps of faith, but Laruelle locates such leaps precisely in philosophers’ own narcissistic origin stories. This argument follows from Chapter One of Galloway’s Laruelle, which outlines how all philosophy begins with the world as ‘fact.’ For example: the atomists begin with change, Kant with empirical judgment, and Fichte with the principle of identity. And because facts do not speak for themselves, philosophy elects for itself a second task — after establishing what ‘is’ — inventing a form of thought to reflect on the world. Philosophy thus arises out of a brash entitlement: the world exists to be thought. Galloway reminds us of this through Gottfried Leibniz, who tells us that “everything in the world happens for a specific reason” (and it is the job of philosophers to identify it), and Alfred North Whitehead, who alternatively says, “no actual entity, then no reason” (so it is up to philosophers to find one).

For Laruelle, various philosophies are but variations on a single approach that first begins by positing how the world presents itself, and second determines the mode of thought that is the appropriate response. Between the two halves, Laruelle finds a grand division: appearance/presence, essence/instance, Being/beings. Laruelle’s key claim is that philosophy cannot think the division itself. The consequence is that such a division is tantamount to cheating, as it wills thought into being through an original thoughtless act. This act of thoughtlessly splitting the world in half is what Laruelle calls “the philosophical decision.”

    Philosophy need not wait for Laruelle to be demoted, as it has already done this for itself; no longer the queen of the sciences, philosophy seems superfluous to the most harrowing realities of contemporary life. The recent focus on Laruelle did indeed come from a reinvigoration of philosophy that goes under the name ‘speculative realism.’ Certainly there are affinities between Laruelle and these philosophers — the early case was built by Ray Brassier, who emphasizes that Laruelle earnestly adopts an anti-correlationalist position similar to the one suggested by Quentin Meillassoux and distances himself from postmodern constructivism as much as other realists, all by positing the One as the Real. It is on the issue of philosophy, however, that Laruelle is most at odds with the irascible thinkers of speculative realism, for non-philosophy is not a revolt against philosophy nor is it a patronizing correction of how others see reality. 1 Galloway argues that non-philosophy should be considered materialist. He attributes to Laruelle a mix of empiricism, realism, and materialism but qualifies non-philosophy’s approach to the real as not a matter of the givenness of empirical reality but of lived experience (vécu) (Galloway, Laruelle, 24-25). The point of non-philosophy is to withdraw from philosophy by short-circuiting the attempt to reflect on what supposedly exists. To be clear: such withdrawal is not an anti-philosophy. Non-philosophy suspends philosophy, but also raids it for its own rigorous pursuit: an axiomatic investigation of the generic. 2

    From Decision to Digital

A sharp focus on the concept of “the digital” is Galloway’s main contribution — a concept not in the forefront of Laruelle’s work, but of great interest to all of us today. Drawing from non-philosophy’s basic insight, Galloway’s goal in Laruelle is to demonstrate the “special connection” shared by philosophy and the digital (15). Galloway asks his readers to consider a withdrawal from digitality that is parallel to the non-philosophical withdrawal from philosophy.

Just as Laruelle discovered the original division to which philosophy must remain silent, Galloway finds that the digital is the “basic distinction that makes it possible to make any distinction at all” (Laruelle, 26). Certainly the digital-analog opposition survives this reworking, but not as one might assume. Gone are the usual notions of online-offline, new-old, stepwise-continuous variation, etc. To maintain these definitions presupposes the digital, or as Galloway defines it, “the capacity to divide things and make distinctions between them” (26). Non-philosophy’s analogy for the digital thus becomes the processes of distinction and decision themselves.

    The dialectic is where Galloway provocatively traces the history of digitality. This is because he argues that digitality is “not so much 0 and 1” but “1 and 2” (Galloway, Laruelle, 26). Drawing on Marxist definitions of the dialectical process, he defines the movement from one to two as analysis, while the movement from two to one is synthesis (26-27). In this way, Laruelle can say that, “Hegel is dead, but he lives on inside the electric calculator” (Introduction aux sciences génériques, 28, qtd in Galloway, Laruelle, 32). Playing Badiou and Deleuze off of each other, as he does throughout the book, Galloway subsequently outlines the political stakes between them — with Badiou establishing clear reference points through the argument that analysis is for leftists and synthesis for reactionaries, and Deleuze as a progenitor of non-philosophy still too tied to the world of difference but shrewd enough to have a Spinozist distaste for both movements of the dialectic (Laruelle, 27-30). Galloway looks to Laruelle to get beyond Badiou’s analytic leftism and Deleuze’s “Spinozist grand compromise” (30). His proposal is a withdrawal in the name of indecision that demands abstention from digitality’s attempt to “encode and simulate anything whatsoever in the universe” (31).

    Insufficiency

    Insufficiency is the idea into which Galloway sharpens the stakes of non-philosophy. In doing so, he does to Laruelle what Deleuze does to Spinoza. While Deleuze refashions philosophy into the pursuit of adequate knowledge, the eminently practical task of understanding the conditions of chance encounters enough to gain the capacity to influence them, Galloway makes non-philosophy into the labor of inadequacy, a mode of thought that embraces the event of creation through a withdrawal from decision. If Deleuze turns Spinoza into a pragmatist, then Galloway turns Laruelle into a nihilist.

There are echoes of Massimo Cacciari, Giorgio Agamben, and Afro-pessimism in Galloway’s Laruelle. This is because he uses nihilism’s marriage of withdrawal, opacity, and darkness as his orientation to politics, ethics, and aesthetics. From Cacciari, Galloway borrows a politics of non-compromise. But while the Italian Autonomist Marxist milieu of which Cacciari’s negative thought is characteristic emphasizes subjectivity, non-philosophy takes the subject to be one of philosophy’s dirty sins and makes no place for it. Yet Galloway is not shy about bringing up examples, such as Bartleby, Occupy, and other figures of non-action; as in Agamben, however, Galloway’s figures gain significance only in their insufficiency. “The more I am anonymous, the more I am present,” Galloway repeats from Tiqqun to argue axiomatically for the centrality of opacity (233-236). There is also a strange affinity between Galloway and the Afro-pessimists, who both oppose the integrationist tendencies of representational systems ultimately premised on the exclusion, exploitation, and elimination of blackness. In spite of potential differences, both define blackness as absolute foreclosure to being, from which Galloway is determined to “channel that great saint of radical blackness, Toussaint Louverture,” in order to bring about a “cataclysm of human color” through the “blanket totality of black” that “renders color invalid” and brings about “a new uchromia, a new color utopia rooted in the generic black universe” (188-189). What remains an open question is: how does such a formulation of the generic depart from the philosophy of difference’s becoming-minor, whereby liberation must first pass through the figures of the woman, the fugitive, and the foreigner?

    Actually Existing Digitality

One could read Laruelle not as urging thought to become more practical, but to become less so. Evidence for such a claim comes in his retreat to dense abstract writing and his strong insistence against providing examples. Each is an effect of non-philosophy’s approach, which is both rigorous and generic. There are those who object, perhaps justifiably, to Laruelle’s style for taking too many liberties with his prose; most considerations tend to make up for such flights of fancy by putting non-philosophy in communication with more familiar philosophies of difference (Mullarkey 2006; Kolozova 2014). Yet the strangeness of the non-philosophical method is not a stylistic choice intended to encourage reflection. Non-philosophy is quite explicitly not a philosophy of difference — Laruelle’s landmark Philosophies of Difference is an indictment of Hegel, Heidegger, Nietzsche, Derrida, and Deleuze. To this end, non-philosophy does not seek to promote thought through marginality, Otherness, or any other form of alterity.

Readers who have heretofore been frustrated with non-philosophy’s impenetrability may be more attracted to the second part of Galloway’s Laruelle. In part two, Galloway addresses actually existing digitality, such as computers and capitalism. This part also includes a contribution to the ethical turn, which is premised on a geometrically neat set of axioms whereby ethics is the One and politics is the division of the One into two. He develops each chapter through numerous examples, many of them concrete, that help fold non-philosophical terms into discussions with long-established significance. For instance, Galloway makes his way through a chapter on art and utopia with the help of James Turrell’s light art, Laruelle’s Concept of Non-Photography, and August von Briesen’s automatic drawing (194-218). The book is over three hundred pages long, so most readers will probably appreciate the brevity of many of the chapters in part two. The chapters are short enough to be impressionistic while implying that a treatment as fully rigorous as non-philosophy often demands would be much longer.

    Questions

While his diagrammatical thinking is very clear, I find it more difficult to determine, during Galloway’s philosophical expositions, whether he is embracing or criticizing a concept. The difficulty of such determinations is compounded by the ambivalence of the non-philosophical method, which adopts philosophy as its raw material while simultaneously declaring that philosophical concepts are insufficient. My second fear is that while Galloway is quite adept at wielding his reworked concept of ‘the digital,’ that trademark rigor may be lost when the concept is taken up by less judicious scholars. In particular, his attack on digitality could form the footnote for a disingenuous defense of everything analog.

There is also something deeper at stake: What if we are in the age of non-representation? From the modernists to Rancière and Occupy, we have copious examples of non-representational aesthetics and politics. But perhaps all previous philosophy has only gestured at non-representational thought, and non-philosophy is the first to realize this goal. If so, then a fundamental objection could be raised about both Galloway’s Laruelle and non-philosophy in general: is non-philosophy properly non-thinking or is it just plain not thinking? Galloway’s axiomatic approach is a refreshing counterpoint to Laruelle’s routine circumlocution. Yet a number of the key concepts that non-philosophy provides are still frustratingly elusive. Unlike the targets of Laruelle’s criticism, Derrida and Deleuze, non-philosophy strives to avoid the obscuring effects of aporia and paradox — so is its own use of opacity simply playing coy, or to be understood purely as a statement that the emperor has no clothes? While I am intrigued by anexact concepts such as ‘the prevent,’ and I understand the basic critique of the standard model of philosophy, I am still not sure what non-philosophy does. Perhaps that is an unfair question given the sterility of the One. But as Hardt and Negri remind us in the epigraph to Empire, “every tool is a weapon if you hold it right.” We now know that non-philosophy cuts — what remains to be seen is where and how deeply.
    _____

Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. In his current project, Escape, he explores the apathy, distraction, and cultural exhaustion born from the 24/7 demands of an ‘always-on’ media-driven society. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues.

    _____

    Notes

    1. There are two qualifications worth mentioning: first, Laruelle presents non-philosophy as a scientific enterprise. There is little proximity between non-philosophy’s scientific approach and other sciences, such as techno-science, big science, scientific modernity, modern rationality, or the scientific method. Perhaps it is closest to Althusser’s science, but some more detailed specification of this point would be welcome.

2. Galloway lays out the non-philosophy of generic immanence, the One, in Chapter Two of Laruelle. Though important, this summation of Laruelle’s version of immanence is not Galloway’s main contribution and is thus not the focus of this review. Substantial summaries of this sort are already available, including Mullarkey 2006 and Smith 2013.

    Bibliography

    Brassier, Ray (2003) “Axiomatic Heresy: The Non-Philosophy of François Laruelle,” Radical Philosophy 121.
    Gangle, Rocco (2013) François Laruelle’s Philosophies of Difference (Edinburgh, UK: Edinburgh University Press).
Hardt, Michael and Antonio Negri (2000) Empire (Cambridge, MA: Harvard University Press).
Kolozova, Katerina (2014) Cut of the Real (New York, USA: Columbia University Press).
    Laruelle, François (2010/1986) Philosophies of Difference (London, UK and New York, USA: Continuum).
    Laruelle, François (2011) Concept of Non-Photography (Falmouth, UK: Urbanomic).
    Mullarkey, John (2006) Post-Continental Philosophy (London, UK: Continuum).
    Mullarkey, John and Anthony Paul Smith (eds) (2012) Laruelle and Non-Philosophy (Edinburgh, UK: Edinburgh University Press).
    Smith, Anthony Paul (2013) A Non-Philosophical Theory of Nature (New York, USA: Palgrave Macmillan).

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.

    3. Jones, 13.

    4. See Gibson.

    5. Jones, 5.

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.

    7. Jones, 5. See also Montfort.

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The Lenses of Failure

    The Lenses of Failure

    The Art of Failure

    by Nathan Altice

    On From Software’s Dark Souls II and Jesper Juul’s The Art of Failure

    ~

    I am speaking to a cat named Sweet Shalquoir. She lounges on a desk in a diminutive house near the center of Majula, a coastal settlement that harbors a small band of itinerant merchants, tradespeople, and mystics. Among Shalquoir’s wares is the Silvercat ring, whose circlet resembles a leaping, blue-eyed cat.

    ‘You’ve seen that gaping hole over there? Well, there’s nasty little vermin down there,’ Shalquoir says, observing my window shopping. ‘Although who you seek is even further below.’ She laughs. She knows her costly ring grants its wearer a cat-like affinity for lengthy drops. I check my inventory. Having just arrived in Majula, I have few souls on hand.

    I turn from Shalquoir and exit the house ringless. True to her word, a yawning chasm opens before me, its perimeter edged in slabbed stonework and crumbling statues but otherwise unmarked and unguarded. One could easily fall in while sprinting from house to house in search of Majula’s residents. Wary of an accidental fall, I nudge toward its edge.

    The pit has a mossy patina, as if it was once a well for giants that now lies parched after drinking centuries of Majula’s sun. Its surface is smooth save for a few distant torches sawing at the dark and several crossbeams that bisect its diameter at uneven intervals. Their configuration forms a makeshift spiral ladder. Corpses are slung across the beams like macabre dolls, warning wanderers fool enough to chase after nasty little vermin. But atop the first corpse gleams a pinprick of ethereal light, both a beacon to guide the first lengthy drop and a promise of immediate reward if one survives.

    Silvercat ring be damned, I think I can make it.

    I position myself parallel to the first crossbeam, eyes fixed on that glimmering point. I jump.

    The Jump

    [Dark Souls II screenshots source: ItsBlueLizardJello via YouTube]

    For a breathless second, I plunge toward the beam. My aim is true—but my body is weak. I collapse, sprawled atop the lashed wooden planks, inches from my coveted jewel. I evaporate into a green vapor as two words appear in the screen’s lower half: ‘YOU DIED.’

    Decisions such as these abound in Dark Souls II, the latest entry in developer From Software’s cult-to-crossover-hit series of games bearing the Souls moniker. The first, Demon’s Souls, debuted on the PlayStation 3 in 2009, attracting players with its understated lore, intricate level design, and relentless difficulty. Spiritual successor Dark Souls followed in 2011, and its direct sequel, Dark Souls II, was released earlier this year.

    Each game adheres to standard medieval fantasy tropes: there are spellcasters, armor-clad knights, parapet-trimmed castles, and a variety of fire-spewing dragons. You select one out of several archetypal character classes (e.g., Cleric, Sorcerer, Swordsman), customize a few appearance options, then explore and fight through a series of interconnected, yet typically non-linear, locations populated by creatures of escalating difficulty. What distinguishes these games from the hundreds of other fantasy games those initial conditions could describe are their melancholy tone and their general disregard for player hand-holding. Your hero begins as little more than a voiceless, fragile husk with minimal direction and fewer resources. Merely surviving takes precedence over rescuing princesses or looting dungeons. The Souls games similarly reveal little about their settings or systems, driving some players to declare them among the worst games ever made while catalyzing others to revisit the game’s environs for hundreds of hours. Vibrant communities have emerged around the Souls series, partly in an effort to document the mechanics From Software purposefully obscures and partly to construct a coherent logic and lore from the scraps and minutiae the game provides.

    Dark Souls II Settings

    Unlike most action games, every encounter in Dark Souls II is potentially deadly, from the lowliest grunts to the largest boss creatures. To further raise the stakes, death has consequences. Slaying foes grants souls, the titular items that fuel both trade and character progression. Spending souls increases your survivability, whether you invest them directly in your character stats (e.g. Vitality) or a more powerful shield. However, dying forfeits any souls you are currently carrying and resets your progress to the last bonfire (i.e., checkpoint) you rested beside. The catch is that dying or resting resets any creatures you have previously slain, giving your quest a moribund, Sisyphean repetition that grinds impatient players to a halt. And once slain, you have one chance to recover your lost souls. A glowing green aura marks the site of your previous bereavement. Touch that mark before you die again and you regain your cache; fail to do so and you lose it forever. You will often fail to do so.

    What many Souls reviewers find refreshing about the game’s difficulty is actually a more forgiving variation of the death mechanics found in early ASCII-based games like Rogue (1980), Hack (1985), and NetHack (1987), wherein ‘permadeath’—i.e., death meant starting the game anew—was a central conceit. And those games were almost direct ‘ports’ of tabletop roleplaying progenitors like Dungeons & Dragons, whose early versions were skewed more toward the gritty realism of pulp literature than the godlike power fantasies of modern roleplaying games. A successful career in D&D meant accumulating enough treasure to eventually retire from dungeon-delving, so you could hire other hapless retainers to loot on your behalf. Death was frequent and expected because dungeons were dangerous places. And unless one’s Dungeon Master was particularly lenient, death was final. A fatal mistake meant re-rolling your character. In this sense, the Souls games stand apart from their videogame peers because of the conservatism of their design. Though countless games ape D&D’s generic fantasy setting and stat-based progress model, few adopt the existential dread of its early forms.

    Dark Souls II’s adherence to opaque systems and traditional difficulty has alienated players unaccustomed to the demands of earlier gaming models. For those repeatedly stymied by the game’s frustrations, several questions arise: Why put forth the effort in a game that feels so antagonistic toward its players? Is there any reward worth the frequent, unforgiving failure? Aren’t games supposed to be fun—and is failing fun?

    YOU DIED

    Games scholar Jesper Juul raises similar questions in The Art of Failure, the second book in MIT’s new Playful Thinking series. His central thesis is that games present players with a ‘paradox of failure’: we do not like to fail, yet games perpetually make us do so; weirder still, we seek out games voluntarily, even though the only victory they offer is over a failure that they themselves create. Despite games’ reputation as frivolous fun, they can humiliate and infuriate us. Real emotions are at stake. And, as Juul argues, ‘the paradox of failure is unique in that when you fail in a game, it really means that you were in some way inadequate’ (7). So when my character plunges down the pit in Majula, the developers do not tell me ‘Your character died,’ even though I have named that character. Instead the games remind us, ‘YOU DIED.’ YOU, the player, the one holding the Xbox 360 controller.

    The strength of Juul’s argument is that he does not rely on a single discipline but instead approaches failure via four related ‘lenses’: philosophy, psychology, game design, and fiction (30). Each lens has its own brief chapter and accompanying game examples, and throughout Juul interjects anecdotes from his personal play experience alongside lessons he’s learned co-designing a number of experimental video games. The breadth of examples is wide, ranging from big-budget games like Uncharted 2, Meteos, and Skate 2 to more obscure works like Flywrench, September 12, and Super Real Tennis.

    Juul’s first lens (chapter 2) links up his paradox of failure to a longstanding philosophical quandary known as the ‘paradox of painful art.’ Like video games, art tends to elicit painful emotions from viewers, whether a tragic stage play or a disturbing novel, yet contrary to the notion that we seek to avoid pain, people regularly pursue such art—even enjoy it. Juul provides a summary of positions philosophers have offered to explain this behavior, categorized as follows: deflationary arguments skirt the paradox by claiming that art doesn’t actually cause us pain in the first place; compensatory arguments acknowledge the pain, but claim that the sum of painful vs. pleasant reactions to art yield a net positive; and a-hedonistic arguments deny that humans are solely pleasure-seekers—some of us pursue pain.

    Juul’s commonsense response is that we should not limit human motivation to narrow, atemporal explanations. Instead, a synthesis of categories is possible, because we can successfully manage multiple contradictory desires based on immediate and long-term (i.e., aesthetic) time frames. He writes, ‘Our moment-to-moment desire to avoid unpleasant experiences is at odds with a longer-term aesthetic desire in which we understand failure, tragedy, and general unpleasantness to be necessary for our experience’ (115). In Dark Souls II, I faced a particularly challenging section early on when my character, a sorcerer, was under-powered and under-equipped to face a strong, agile boss known as The Pursuer. I spent close to four hours running the same path to the boss, dying dozens of times, with no net progress.

    Facing the Pursuer

    For Juul, my continued persistence did not betray a masochistic personality flaw (not that I didn’t consider it), nor would he trivialize my frustration (which I certainly felt), nor would he argue that I was eking out more pleasure than pain during my repeated trials (I certainly wasn’t). Instead, I was tolerating immediate failure in pursuit of a distant aesthetic goal, one that would not arrive during that game session—or many sessions to come. And indeed, this is why Juul calls games the ‘art of failure,’ because ‘games hurt us and then induce an urgency to repair our self-image’ (45). I could only overcome the Pursuer if I learned to play better. Juul writes, ‘Failure is integral to the enjoyment of game playing in a way that it is not integral to the enjoyment of learning in general. Games are a perspective on failure and learning as enjoyment, or satisfaction’ (45). Failure is part of what makes a game a game.

    Chapter 3 proceeds to the psychological lens, allowing Juul to review the myriad ways we experience failure emotionally. For many games, the impact can be significant: ‘To play a game is to take an emotional gamble. The higher the stakes, in terms of time investment, public acknowledgement, and personal importance, the higher are the potential losses and rewards’ (57). Failure doesn’t feel good, but again, paradoxically, we must first accept responsibility for our failures in order to then learn from them. ‘Once we accept responsibility,’ Juul writes, ‘failure also concretely pushes us to search for new strategies and learning opportunities in a game’ (116). But why can’t we learn without the painful consequences? Because most of us need prodding to be the best players we can be. In the absence of failure, players will cheese and cheat their way to favorable outcomes (59).

    Juul concludes that games help us grow—‘we come away from any skill-based game changed, wiser, and possessing new skills’ (59)—but his more interesting point is how we buffer the emotional toll of failure by diverting or transforming it. ‘Self-defeating’ players react to failure by lessening their efforts, a laissez-faire attitude that makes failure expected and thus less painful. ‘Spectacular’ failures, on the other hand, elevate negativity to an aesthetic focal point. When I laugh at the quivering pile of polygons clipped halfway through the floor geometry by the Pursuer’s blade, I’m no longer lamenting my own failure but celebrating the game’s.

    Chapter 4 provides a broad view of how games are designed to make us fail and counters much conventional wisdom about prevailing design trends. For instance, many players complain that contemporary games are too easy, that we don’t fail enough, but Juul argues that those players are confusing failure with punishment. Failure is now designed to be more frequent than in the past, but punishment is far less severe. Death in early arcade or console games often meant total failure, resetting your progress to the beginning of the game. Death in Dark Souls II merely forfeits your souls in-hand—any spent souls, found items, gained levels, or cached equipment are permanent. Punishment certainly feels severe when you lose tens of thousands of souls, but the consequences are far less jarring than losing your final life in Ghosts ’n Goblins.

    Juul outlines three different paths through which games lead us to success or failure—skill, chance, and labor—but notes that his categories are neither exhaustive nor mutually exclusive (75, 82). The first category is likely the most familiar for frequent game players: ‘When we fail in a game of skill, we are therefore marked as deficient in a straightforward way: as lacking the skills required to play the game’ (74). When our skills fail us, we only have ourselves to blame. Chance, however, ‘marks us in a different way…as being on poor terms with the gods, or as simply unlucky, which is still a personal trait that we would rather not have’ (75). With chance in play, failure gains a cosmic significance.

    Labor is one of the newer design paths, characterized by the low-skill, slow-grind style of play frequently maligned in Farmville and its clones, but also found in better-regarded titles like World of Warcraft (and RPGs in general). In these games, failure has its lowest stakes: ‘Lack of success in a game of labor therefore does not mark us as lacking in skill or luck, but at worst as someone lazy (or too busy). For those who are afraid of failure, this is close to an ideal state. For those who think of games as personal struggles for improvement, games of labor are anathema’ (79). Juul’s last point is an important lesson for critics quick to dismiss the ‘click-to-win’ genre outright. For players averse to personal or cosmic failure, games of labor are a welcome respite.

    Juul’s final lens (chapter 5) examines fictional failure. ‘Most video games,’ he writes, ‘represent our failures and successes by letting our performance be mirrored by a protagonist (or society, etc.) in the game’s fictional world. When we are unhappy to have failed, a fictional character is also unhappy’ (117). Beginning with this conventional case, Juul then discusses games that subvert or challenge the presumed alignment of player/character interests, asking whether games can be tragic or present situations where character failure might be the desired outcome. While Juul concedes that ‘the self-destruction of the protagonist remains awkward,’ complicity—a sense of player regret when facing a character’s repugnant actions—offers a ‘better variation’ of game tragedy (117). Juul argues that complicity is unique to games, an experience that is ‘more personal and stronger than simply witnessing a fictional character performing the same actions’ (113). When I nudge my character into Majula’s pit, I’m no longer a witness—I’m a participant.

    The Art of Failure’s final chapter focuses the prior lenses’ viewpoints on failure into a humanistic concluding point: ‘Failure forces us to reconsider what we are doing, to learn. Failure connects us personally to the events in the game; it proves that we matter, that the world does not simply continue regardless of our actions’ (122). For those who already accept games as a meaningful, expressive medium, Juul’s conclusion may be unsurprising. But this kind of thoughtful optimism is also part of the book’s strength. Juul’s writing is approachable and jargon-free, and the Playful Thinking series’ focus on depth, readability, and pocket-size volumes makes The Art of Failure an ideal book to pass along to friends and colleagues who might question your ‘frivolous’ videogame hobby—or, more importantly, justify why you often spend hours swearing at the screen while purportedly in pursuit of ‘fun.’

    The final chapter also offers a tantalizingly brief analysis of how Juul’s lenses might refract outward, beyond games, to culture at large. Specifically targeting the now-widespread corporate practice of gamification, wherein game design principles are applied as motivators and performance measures for non-leisure activities (usually work), Juul reminds us that the technique often fails because workplace performance goals ‘rarely measure what they are supposed to measure’ (120). Games are ideal for performance measurement because of their peculiar teleology: ‘The value system that the goal of a game creates is not an artificial measure of the value of the player’s performance; the goal is what creates the value in the first place by assigning values to the possible outcomes of a game’ (121). This kind of pushback against digital idealism is an important reminder that games ‘are not a pixie dust of motivation to be sprinkled on any subject’ (10), and Juul leaves a lot of room for further development of his thesis beyond the narrow scope of videogames.

    For the converted, The Art of Failure provides cross-disciplinary insights into many of our unexamined play habits. While playing Dark Souls II, I frequently thought of Juul’s triumvirate of design paths. Dark Souls II is an exemplary hybrid—though much of your success is skill-based, chance and labor play significant roles. The algorithmic systems that govern item drops or boss attacks can often sway one’s fortunes toward success or failure, as many speedrunners would attest. And for all the ink spilt about Dark Souls II being a ‘hardcore’ game with ‘old-school’ challenge, success can also be won through skill-less labor. Summoning high-level allies to clear difficult paths or simply investing hours grinding souls to level your character are both viable supplements to chance and skill.

    But what of games that do not fit these paths? How do they contend with failure? There is a rich tradition of experimental or independent artgames, notgames, game poems, and the like that are designed with no path to failure. Standout examples like Proteus, Dys4ia, and Your Lover Has Turned Into a Flock of Birds require no skills beyond operating a keyboard or mouse, do not rely on chance, and require little time investment. Unsurprisingly, games like these are often targeted as ‘non-games,’ and Juul’s analysis leaves little room for games that skirt these borderlines. There is a subtext in The Art of Failure that draws distinctions between ‘good’ and ‘bad’ design. Early on, Juul writes that ‘(good) games are designed such that they give us a fair chance’ (7) and ‘for something to be a good game, and a game at all, we expect resistance and the possibility of failure’ (12).

    There are essentialist, formalist assumptions guiding Juul’s thesis, leading him to privilege games’ ‘unique’ qualities at the risk of further marginalizing genres, creators, and hybrid play practices that already operate at the margins. To argue that complicity is unique to games or that games are the art of failure is to make an unwarranted leap into medium specificity and draw borderlines that need not be drawn. Certainly other media can draw us into complicity, a path well-trodden in cinema’s exploration of voyeurism (Rear Window, Blow-Up) and extreme horror (Saw, Hostel). Can’t games simply be particularly strong at complicity, rather than its sole purveyor?

    I’m similarly unconvinced that games are the quintessential art of failure. Critics often contend that video games are unique as a medium in that they require a certain skill threshold to complete. While it is true that finishing Super Mario Bros. is different from watching the entirety of The Godfather, we can use Juul’s own multi-path model to understand how we might fail at other media. The latter example certainly requires more labor—one can play dozens of Super Mario runs during The Godfather’s 175-minute runtime. Further, watching a film lauded as one of history’s greatest carries unique expectations that many viewers may fail to satisfy, from the societal pressure to agree on its quality to the faculties of comprehension necessary to follow its narrative. Different failures arise from different media—I’ve failed reading Infinite Jest more than I’ve failed completing Dark Souls II. And any visit to a museum will teach you that many people feel as though they fail at modern art. Tackling Dark Souls II’s Pursuer or confronting Barnett Newman’s Onement, I can be equally daunting.

    When scholars ask, as Juul does, what games can do, they must be careful that by doing so they do not also police what games can be. Failure is a compelling lens through which to examine our relationship to play, but we needn’t valorize it as the only measure of what counts as a game.
    _____


    Nathan Altice is an instructor of sound and game design at Virginia Commonwealth University and author of the platform study of the NES/Famicom, I AM ERROR (MIT, 2015). He writes at metopal.com and burns bridges at @circuitlions.

  • The People’s Platform by Astra Taylor

    The People’s Platform by Astra Taylor


    Or is it?: Astra Taylor’s The People’s Platform

    Review by Zachary Loeb

    ~

    Imagine not using the Internet for twenty-four hours.

    Really: no Internet from dawn to dawn.

    Take a moment to think through the wide range of devices you would have to turn off and services you would have to avoid to succeed in such a challenge. While a single day without going online may not represent too outlandish an ordeal, such an endeavor would still require some social and economic gymnastics. From the way we communicate with friends to the way we order food to the way we turn in assignments for school or complete tasks in our jobs – our lives have become thoroughly entangled with the Internet. Whether its power and control are overt or subtle, the Internet has come to wield an impressive amount of influence over our lives.

    All of which should serve to raise a discomforting question – so, who is in control of the Internet? Is the Internet a fantastically democratic space that puts the power back in the hands of people? Is the Internet a sly mechanism for vesting more power in the hands of the already powerful, whilst distracting people with a steady stream of kitschy content and discounted consumerism? Or, is the Internet a space relying on levels of oft-unseen material infrastructures with a range of positive and negative potentialities? These are the questions that Astra Taylor attempts to untangle in her book The People’s Platform: Taking Back Power and Culture in the Digital Age (Metropolitan Books, 2014). It is the rare example of a book where the title itself forms a thesis statement of sorts: the Internet was and can be a platform for the people but this potential has been perverted, and thus there needs to be a “taking back” of power (and culture).

    At the outset Taylor locates her critique in the space between the fawning of the “techno-optimists” and the grousing of the “techno-skeptics.” Far from trying to assume a “neutral” stance, Taylor grounds her discussion of the “techno” by stepping back to consider the social, political, and economic forces that shape the “techno” reality that inspires optimism and skepticism. Taylor, therefore, does not build her argument upon a discussion of the Internet as such but around a discussion of the Internet as it is and as it could be. Unfortunately, the “as it currently is” of this “new media” evinces that “Corporate power and the quest for profit are as fundamental to new media as old” (8).

    Thus Taylor sets up the conundrum of the Internet – it is at once a media platform with a great deal of democratic potential, and yet this potential has been continually appropriated for bureaucratic, technocratic, and indeed plutocratic purposes.

    Over the course of The People’s Platform Taylor moves from one aspect of the Internet (and its related material infrastructures) to another – touching upon a range of issues: the Internet’s history; copyright and the way it has undermined the ability of “cultural creators” to earn a living; the ways the Internet persuades and controls; journalism and e-waste; and the ways in which the Internet can replicate the misogyny and racism of the offline world.

    With her background as a documentary filmmaker (she directed the film Examined Life [which is excellent]), Taylor is skilled at cutting deftly from one topic to the next, though this particular experience also gives her cause to dwell at length upon the matter of how culture is created and supported in the digital age. Indeed, as a maker of independent films, Taylor is particularly attuned to the challenges of making culturally valuable content in a time when free copies spread rapidly online. Here too Taylor demonstrates the link to larger economic forces – there are still highly successful “stars” and occasional stories of “from nowhere” success, but the result is largely that those attempting to eke out a nominal subsistence find it increasingly challenging to do so.

    As the Internet becomes the principal means of disseminating material, “cultural creators” find themselves bound to a system wherein the ultimate remuneration rarely accrues back to them. Likewise, the rash of profit-driven mergers and shifting revenue streams has resulted in a steady erosion of the journalistic field. It is not – as Taylor argues – that there is a lack of committed “cultural creators” and journalists working today; it is that they are finding it increasingly difficult to sustain their efforts. The Internet, as Taylor describes it, is certainly making many people enormously wealthy, but those made wealthy are more likely to be platform owners (think Google or Facebook) than those who fill those platforms with the informational content that makes them valuable.

    Though the Internet may have its roots in massive public investment and though the value of the Internet is a result of the labor of Internet users (example: Facebook makes money by selling advertisements based on the work you put in on your profile), the Internet as it is now is often less of an alternative to society than it is a replication. The biases of the offline world are replicated in the digital realm, as Taylor puts it:

    “While the Internet offers marginalized groups powerful and potentially world-changing opportunities to meet and act together, new technologies also magnify inequality, reinforcing elements of the old order. Networks do not eradicate power: they distribute it in different ways, shuffling hierarchies and producing new mechanisms of exclusion.” (108)

    Thus, the Internet – often under the guise of promoting anonymity – can be a site for an explosion of misogyny, racism, classism, and an elitism blossoming from a “more-technologically-skilled-than-thou” position. There are certainly many “marginalized groups” and individuals trying to use the Internet to battle their historical silencing, but for every social justice minded video there is a comment section seething with the grunts of trolls. Meanwhile behind this all stand the same wealthy corporate interests that enjoyed privileged positions before the rise of the Internet. These corporate forces can wield the power they gain from the Internet to steer and persuade Internet users in such a way that the “curated experience” of the Internet is increasingly another way of saying, “what a major corporation thinks you (should) want.”


    Breaking through the ethereal airs of the Internet, Taylor also grounds her argument in the material realities of the digital realm. While it is true that more and more people are increasingly online, Taylor emphasizes that there are still many without access and that the high-speed access enjoyed by some is not had by one and all. Furthermore, all of this access, all of these fanciful devices, all of these democratic dreams are reliant upon a physical infrastructure shot through with dangerous mining conditions, wretched laboring facilities, and toxic dumps where discarded devices eventually go to decay. Those who are able to enjoy the Internet as a positive feature in their day to day life are rarely the same people who worked in the mines, the assembly plants, or who will have to live on the land that has been blighted by e-waste.

    While Taylor refuses to ignore the many downsides associated with the Internet age, she remains fixed on its positive potential. The book concludes without offering a simplistic list of solutions but nevertheless ends with a sense that those who care about the Internet’s non-corporate potential need to work to build a “sustainable digital future” (183). Though there are certainly powerful interests profiting from the current state of the Internet, the fact remains that (in a historical sense) the Internet is rather young, and there is still time to challenge the shape it is taking. Considering what needs to be done, Taylor notes: “The solutions we need require collective, political action.” (218)

    It is a suggestion that carries the sentiment that people can band together to reassert control over the online commons steadily being enclosed by corporate interests. By considering the Internet as a public utility (a point being discussed at the moment in regard to Net Neutrality) and by focusing on democratic values instead of financial values, it may be possible for people to reverse (or at least slow) the corporate wave that is washing over the Internet.

    After all, if the Internet is the result of massive public investment, why has it been delivered into corporate hands? Ultimately, Taylor concludes (in a chapter titled “In Defense of the Commons: A Manifesto for Sustainable Culture”) that if people want the Internet to be a “people’s platform” they will have to organize and fight for it (“collective, political”). In a time when the Internet is an important feature of society, it makes a difference whether the Internet is an open “people’s platform” or a highly (if subtly) controlled corporate theme park. “The People’s Platform” requires people who care to raise their voices…such as the people who have read Astra Taylor’s book, perhaps.

    * * * * *

    With The People’s Platform Astra Taylor has made an effective and interesting contribution to the discussion around the nature of the Internet and its future. By emphasizing a political and economic critique, she is able to pull the Internet away from a utopian fantasy in order to analyze it in terms of the competing forces that have shaped (and continue to shape) it. The perspective that Taylor brings, as a documentary filmmaker, allows her to drop the journalistic façade of objectivity in order to genuinely and forcefully engage with issues pertaining to the compensation of cultural creators in the age of digital dissemination. The sections Taylor writes on the level of misogyny one encounters online and on e-waste make the book particularly noteworthy. Though each chapter of The People’s Platform could likely be extended into an entire book, it is in their interconnections that Taylor is able to demonstrate the layered issues that are making such a mess of the Internet today. For the problem facing the online realm is not just corporate control – it is a slew of issues that need to be recognized in total (and in their interconnected nature) if any type of response is to be mounted.

    Though The People’s Platform is ostensibly about a conflict regarding the future of the Internet, the book is itself a site of conflicting sentiments. Though Taylor – at the outset – aims to avoid aligning herself with the “cheerleaders of progress” or “the prophets of doom” (4), the book that emerges is one that sits in the stands of the “cheerleaders of progress” (even if with slight misgivings about being in those stands). The book’s title suggests that even with all of the problems associated with the Internet it still represents something promising, something worth fighting to “take back.” It is a point that is particularly troublesome to consider after Taylor’s description of labor conditions and e-waste. For one of the main questions that emerges toward the end of Taylor’s book – though it is not one she directly poses – troubles the book’s very title: which “people” are being described in “the people’s platform”?


    It may be tempting to answer such a question with a simplistic “well, all of the people,” yet such a response is inadequate in light of the way that Taylor’s book clearly discusses the layers of control and dominance one finds surrounding the Internet. Can the Internet be “the people’s platform” for writers, journalists, documentary filmmakers, and activists with access to digital tools? Sure. But what of those described in the e-waste chapter – people living in oppressive conditions and toiling in factories where building digital devices puts them at risk of cancer, or where disassembling such devices poisons them and their families? Those people count as well, but those upon whom “the people’s platform” is built seem to be crushed beneath it, not able to get on top of it – to stand on “the people’s platform” is to stand on the hunched shoulders of others. It is true that Taylor takes this into account in emphasizing that something needs to be done to recognize and rectify this matter – but insofar as the material tools “the people” use to reach the Internet are built upon the repression and oppression of other people, it sours the very notion of the Internet as “the people’s platform.”

    This in turn raises another question: what would a genuine “people’s platform” look like? In the conclusion to the book Taylor attempts to answer this question by arguing for political action and increased democratic control over the Internet; however, one can easily imagine classifying the Internet as a “public utility” without doing anything to change the laboring conditions of those who build devices. Indeed, the darkly amusing element of The People’s Platform is that Taylor answers this question brilliantly on the second page of her book and then spends the following two hundred and thirty pages ignoring this answer.

    Taylor begins The People’s Platform with an anecdote about her youth in the pre-Internet (or pre-high speed Internet) era, wherein she recalls working on a small personally assembled magazine (a “zine”) which she would then have printed and distribute to friends and a variety of local shops. Looking back upon her time making zines, Taylor writes:
    “Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing.” (2)

    These lines from Taylor come only a sentence after she considers how her access to easy photocopying (for her zine) made it easier for her than it had been for earlier would-be publishers. Indeed, Taylor recalls:

    “a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.” (2)

    There are a few subtle moments in the above lines (from the second page of Taylor’s book) that say far more about a “people’s platform” than they let on. It is true that a smartphone gives a person “the potential to reach more people,” but, as the rest of Taylor’s book makes clear, it is not necessarily the case that people really do “reach more people” online. There are certainly wild success stories, but for “any kid” their reach with their smartphone may not be much greater than the number of people reachable with a photocopied zine. Furthermore, the zine audience might have been more engaged and receptive than the idle scanner of Tweets or Facebook updates – the smartphone may deliver more potential but actually achieve less.

    Nevertheless, the key aspect is Taylor’s comment about the “veteran political organizer” – this organizer (“and his friends”) were able to “buy a mimeograph machine so they could make a newsletter.” Is this different from buying a laptop computer, Internet access, and a domain name? Actually? Yes. Yes, it is. For once those newsletter makers bought the mimeograph machine, they were in control of it – they did not need to worry about its Terms of Service changing, about pop-up advertisements, about their movements being tracked through the device, about the NSA having installed a convenient backdoor – and frankly there’s a good chance that the mimeograph machine they purchased had a much longer life than any laptop they would purchase today. Again – they bought and were able to control the means for disseminating their message; one cannot truly buy all of the means necessary for disseminating an online message (when one includes cable, ISP providers, etc…).

    The case of the mimeograph machine and the Internet raises the question of which technologies represent genuine people’s platforms and which merely result in potential “people’s platforms” (note the quotation marks). This is not to say that mimeograph machines are perfect (after all, somebody did build that machine), but when considering technology in a democratic sense it is important to puzzle over whether (to borrow Lewis Mumford’s terminology) the tool itself is “authoritarian” or “democratic.” The way the Internet appears in Taylor’s book – with its massive infrastructure, propensity for centralized control, and material reality built upon toxic materials – should at the very least make one question to what extent the Internet is genuinely a democratic “people’s” tool, or whether it is simply such a tool for those who are able to enjoy the bulk of the benefits and a minimum of the downsides. Taylor clearly does not want to be accused of being a “prophet of doom” – or of being a prophet for profit – but the sad result is that she jumps over the genuine people’s platform she describes on the second page in favor of building an argument for a platform that, by book’s end, hardly seems to be one for “the people” in any but a narrow sense of the term.

    The People’s Platform: Taking Back Power and Culture in the Digital Age is a well-written, solidly researched, and effectively argued book that raises many valuable questions. The book offers no simplistic panaceas but instead forces the reader to think through the issues – oftentimes by forcing them to confront uncomfortable facts about digital technologies (such as e-waste). As Taylor uncovers and discusses issue after bias after challenge regarding the Internet, the question that haunts her text is whether the platform she is describing – the Internet – is really worthy of being called “the people’s platform.” And if so, to which “people” does this apply?

    The People’s Platform is well worth reading – but it is not the end of the conversation. It is the beginning of the conversation.

    And it is a conversation that is desperately needed.

    __

    The People’s Platform: Taking Back Power and Culture in the Digital Age
    by Astra Taylor
    Metropolitan Books, 2014

    __

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck, which is where this review originally appeared.

  • The Digital Turn

    The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which the first of the major literary studies journals dedicates part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists, our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and it is precisely the intensity and thoroughgoing nature of the putative digital revolution that must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and that which comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow” or most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my being one of the first English professors in the country to be hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and despite my being a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the ‘Dark Side of the Digital Humanities’ at the 2013 MLA Annual Convention and several at recent American Studies Association conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were primary; cultural representation and its relation to (or lack of relation to) identity politics was a central concern; close reading of texts both likely and unlikely figured prominently; the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because such claims are both so confrontational and so antithetical to the remit of the literary humanities, from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities, as represented by so august an institution as boundary 2, should turn its attention there validates digital enthusiasts’ sense of the medium’s importance, but it should also provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia
