boundary 2

  • Brian Hughes — Thriving from Exile: Toward a Materialist Analysis of the Alt-Right

    Brian Hughes

    Introduction: Postmortem

    Between 2015 and 2018—when the (so-called) “alt-right” first exploded to prominence in the public eye—media coverage and academic scrutiny of this loose-knit far-right coalition approached the topic almost exclusively from the perspectives of ethnography, cultural/discursive mapping, and ideological historiography. And, indeed, circumstances demanded such approaches. Countless readers were taken off guard by the sudden wave of antisemitic internet trolls and polo-clad neo-fascists whom they now saw marching in the streets. Only methods such as these were capable of operating with the necessary speed to orient the public to a grotesque new movement that appeared to enjoy the ear of the president himself.

    But in that haste, something was neglected. To date, a rigorous, comprehensive materialist analysis of the alt-right and its origins has yet to appear. Of course, the great challenge of historical materialism is that it demands detail—facts pertaining to the realities of finance, technological affordances, regulation of capital and labor under the law, stocks and flows of raw currency, and so on. And such detail cannot be developed without ample time for researchers to acquire and organize it, or for readers to absorb it. But time was in short supply as the alt-right made its transition from a mostly virtual media phenomenon to a political movement characterized by public demonstrations, entry into the halls of American power, and, very quickly, murder. And so, the “culturalist” approach rightly predominated.

    It should go without saying that such an absence of materialist analysis has left us only partially equipped to recognize, let alone oppose, future movements owing their origins to conditions similar to those of the alt-right. Today, at least in the opinion of some commentators, the alt-right proper may be a spent force (Weill 2018; McCoy 2018; Barrouquere 2018). But its legacy lives on in even more extreme ideologies and movements. These new forces of the far-right are emerging according to patterns startlingly similar to those which birthed the alt-right. It is essential that we study them in light of the relations of capital to productive labor and technology.

    Unfortunately, the convergence of crises that menace the present day, spanning from the rise of a new populist authoritarianism to climate catastrophe and beyond, is defined precisely by an urgency that would seem to preclude the production of rigorous dialectical works. This essay nevertheless advocates for such an impossible approach—indeed, insists upon the necessity of this tedious, time-consuming work. Toward that end, it will indicate some approaches that such a fact-driven, dialectical method might take. It will identify key economic antagonisms and moments of technological revolution, which set into place the conditions necessary for the emergence of a proto-alt-right media ecosystem, and eventually of the alt-right itself. It will indicate how similar patterns of antagonism and technological change are contributing to the emergence of newer, yet more radical and dangerous far-right fringe movements today. And while these are, at best, trailheads to a more detailed and rigorous analysis, perhaps this essay will at least serve as a postmortem for a moment that has since grown into a crisis. Perhaps in its very failure to fully answer its own mandate, it will succeed in stressing the urgency of such an undertaking.

    Gaps and Surfeits: Reviewing the Culturalist Literature

    To be sure, many fine works of political economy addressing this era of far-right ascendancy are being written. But while indispensable, these do not address the alt-right per se. The journal Critical Sociology recently published its symposium “Neoliberalism and the Far Right,” a concise set of articles describing the “organic or constitutive pathologies or contradictions within the political economy of neoliberalism that, in many respects, dates back to the emergence of this distinct ideo-political framework in the 1930s,” and (so the symposium’s participants argue) produced the conditions that have led us to our current moment of authoritarian populism (Kiely and Saull 2018, 821). The Monthly Review continues to publish exemplary works of materialist political economy, such as Michael Joseph Roberto’s 2017 piece, The Origins of American Fascism. In it, he seeks to recruit the works of key theorists of 20th-century fascism (Baran, Sweezy, Haider, Corey, Magil and Stevens) for the needs of today (Roberto 2017). As in the Critical Sociology symposium, this work insists upon a recognition of historical continuity. In stark contrast to the exceptional or atavistic treatment that characterizes so much popular coverage and analysis of President Trump (Robin 2017), Roberto’s insistence upon historical continuity will be essential to a project of materialist analysis of the alt-right.

    Unfortunately, these works, and others like them, leave the alt-right itself untouched or, at best, tangential to the broader issues of far-right populism, the radicalization of the American white middle class, and the legacy of neoliberalism and its “cleansing [of] state from the consequences of (social) democracy” (Kiely and Saull 2018, 822). Perhaps this is appropriate. For while the alt-right may have seized an outsized share of public attention, it is debatable just how great an influence the movement can realistically claim (Mudde 2018). Indeed, the works of Roberto, Foster, and the Critical Sociology symposiasts indicate that we must not treat the alt-right as a primary stimulus of our country’s current predicament. However, neither is the alt-right reducible to a generic symptom of these same historical forces. While unimaginable outside of the broader historical political-economic context sketched above, the alt-right is a consequence of a subset of productive forces specific to itself. A historical materialist analysis of the alt-right must seek to identify the productive patterns that were unique to the genesis and metastasis of the movement—hence the importance of an initial focus on media and communication technology.

    Major works specifically addressing the alt-right have been largely free of political economic approaches. The most prominent long-form texts on the topic make no claims, implicit or otherwise, to performing a materialist analysis of the subject. George Hawley’s Making Sense of the Alt-Right is a rigorous (if brief) scholarly treatment of the movement, which profiles prominent movement personalities, charts pivotal moments in the movement’s evolution and metastasis, and highlights the ideological positions that defined the movement over the past decade and a half (Hawley 2017). While Hawley does venture to identify some causal patterns pertaining to relations between capital and the productive forces that gave birth to the movement, he stops far short of a structural analysis. Mike Wendling’s Alt-Right: From 4chan to the White House is a detailed taxonomy of the cultural and ideological categories that comprise the alt-right. It offers a clear and well-delineated lexicon with which to discuss the alt-right, but effectively no causal analysis of the movement’s origins and orientations (Wendling 2018). David Neiwert’s Alt-America: The Rise of the Radical Right in the Age of Trump does attempt to trace origins and contingencies, narrativizing the movement through the political evolution of American conspiracy cultures (Neiwert 2017). Neiwert makes a convincing case for the presence of conspiratorial thinking across American far-right subcultures. And his claim that the alt-right represents an outgrowth of militia and anti-New World Order subcultures is intriguing enough to warrant serious pursuit. Nevertheless, Neiwert’s analysis is also primarily cultural, and leaves material explanations largely unmodeled.

    The sole full-length work to focus on the alt-right while claiming to speak from the socialist position is Angela Nagle’s monograph Kill All Normies. The alt-right, Nagle argues, emerged as a force of opposition to what the right characterizes as unchecked “PC-cultural politics” (Nagle 2017, 19) of the online left, a movement which had become preoccupied with toxic identity politics and ideological purges. In what has become one of the book’s most hotly debated passages, Nagle writes that “the key driving force behind [online call-out culture] is about creating scarcity in an environment in which virtue is the currency… the counterforce of which was the anonymous underworld from which the right-wing trolling cultures emerged” (Nagle 2017, 76). That is to say that an exclusionary left-wing culture created the opening for a strategic right-wing backlash. This contention has, in the years following its publication, further exacerbated divisions within the left (Liu 2017, Stewart 2017) while simultaneously provoking attempts to seal these fissures (Weatherby 2017).

    Whether it is or is not accurate, and for all the self-reflection it may have provoked on the left, Nagle’s critique should not be mistaken for a materialist analysis of the alt-right’s origins and modes of self-reproduction. Rather, it would more accurately be described as a cultural ethnography presented via market metaphor. Nagle’s “online economy of virtue” (Nagle 2017, 68) belongs to the realm of political economy only insofar as it is libidinal and “there is as much libidinal intensity in capitalist exchange as in the alleged ‘symbolic’ exchange” (Lyotard 1993, 109). But while such a transposition is no doubt possible, this cannot credibly be claimed as Nagle’s project.

    Nagle’s critique takes place at the level of culture, engaging with culture as experienced and described by those within it. And while this approach contains some shortcomings, so too do all methodologies and critical frameworks. The culturalist approach no doubt offers advantages that other analytic lenses do not. Culturalist approaches like Nagle’s can reveal intra-movement fault lines while charting the expressions of (for example) commodity fetishism in online subculture. This can help us to understand how consumer identity merged with reactionary politics in the Gamergate movement that began in 2014 (Massanari 2017, Salter 2018). It should also be noted that culturalist approaches offer lay readers a compelling entry point into otherwise alien objects of study. When faced with the sudden appearance of a strange and frightening movement like the alt-right, such reader appeal is vital.

    Clearly, we do not lack for well-drawn histories and ethnographies of the alt-right. Nor do we lack for serious political economic treatments of the global authoritarian populist turn. What we lack is a substantive work that will specifically treat the alt-right as the outcome of relations of production at those sites from which the alt-right issued forth.

    Trailheads: Sites of Interest for Material Analysis

    The alt-right was initially a media-oriented phenomenon, existing almost exclusively in the communicative space of Web 2.0 and the subsequent Social Web. Since “different ways of financing and organizing cultural production have traceable consequences for the range of discourse, representations, and communicative resources…and for the organization of audience access and use” (Golding and Murdock 2005, 70), a materialist analysis of the origins of the alt-right might well begin with the financial, technological, and productive-relational history of media and communication technology.

    In fact, the alt-right came about through a decades-long intra-right-wing struggle over ownership of and access to media and communication technologies—both in the organs of press and broadcast media, and within the space of think tanks, intellectual societies, and, occasionally, universities. This internecine struggle was augmented by much broader shifts in conditions of ownership and techno-legal regulatory frameworks, which characterized communication technology and media in the late 20th and early 21st centuries.

    Each generation of 20th-century American reactionaries found itself forced to contend with a progressive narrowing of its access to mass media. Lacking access to the organs of conservative ideological commodity production, these groups and individuals would coalesce over the course of decades into a thriving network of clubs, social circles, and publications funded by wealthier members of the marginal far-right. This sequestration effected a process of further ideological radicalization, characterized by risk-shifting and isolation-cohesion (McCauley and Moskalenko 2016)—trends only exacerbated by the need to produce and reproduce a market for far-right ideological content that went mostly unsatisfied by mainstream counterparts. As digital technology (defined in large part by the commercial internet and its laissez-faire regulatory regime) offered new and inexpensive vehicles by which to reach the public, a new generation of reactionaries came of age, radicalized in an era when such access could at last be taken for granted.

    Many observers, both within and outside of the alt-right, cite William F. Buckley’s purge of the John Birch Society from the American conservative movement as the beginning of the American far-right’s years on the media fringe (Ashbee 2000). Finding themselves out of step with the relatively liberal tenor of the times, Buckley, his National Review magazine, and the conservative movement for which they claimed to speak pursued not merely a change in image, but a wholesale redrawing of the boundaries of American conservatism. Along with the expulsion of the John Birch Society and its leader Robert Welch, this reorientation involved the rejection of Randian Objectivists, the explicitly antisemitic Liberty Lobby, and other, smaller concerns (Mintz 1985). Through a campaign of editorial and organizational exclusion, a new, “midcentury American conservatism was self-consciously created to appeal to the mainstream of American philosophical liberalism” (Deneen 2017, 24). Throughout its history, National Review never turned a profit and was dependent on Buckley’s ability to “draw on elite social circles for additional donations to the magazine” (Sivek 2008, 267). Purging the embarrassments of Robert Welch, Ayn Rand, et al. was therefore imperative to the continued funding of American conservatism’s midcentury journal of record. And so, this purge was as much a ruthless financial decision as an ideological one (and indeed, an orthodox dialectical materialism would stress the determining pressure of finance upon ideology).

    Despite the National Review’s considerable influence, it was never the sole gatekeeper of conservative communications. The Buckley purge did not single-handedly create the critical mass necessary for a rival, dissident far-right media ecosystem to coalesce. Buckley’s “no-platforming” strategy succeeded in sanitizing the public face of movement conservatism while disciplining its operatives. But in doing so, it only curtailed the ability of these tendencies to steer conservative politics in the second half of the 20th century. Birchers continued to operate their own not-inconsiderable media operations via ownership of a vast publishing and distribution infrastructure (Mintz 1985). Meanwhile, Objectivists remained a numerically small but disproportionately influential current within midcentury discourse, serving to justify unbridled capitalism (Toy 2004). The ideological projects represented by these now-officially fringe groups were merely repressed—not eliminated. While their sequestration from primary economies of ideological media production severely diminished their ability to impact mass politics, it did not end their (small-i) ideological projects. These would remain constant until such time as the conditions of the political economy of media shifted several decades later.

    It was the neoconservative ascension, and the concomitant “paleoconservative purges,” of the 1980s, 90s, and 2000s (Berlet 2008, Gottfried 2015), which brought together the primary cohort of individuals, groups, and sources of financing that would constitute the core of the proto-alt-right. Racist ultraconservatives such as Paul Gottfried, Joe Sobran, Patrick Buchanan, William Regnery II, Peter Brimelow, Mel Bradford, and Sam Francis (to name but a few) found themselves, one by one, forced from such organs of the conservative movement as Commentary, the Intercollegiate Studies Institute, and (many times over) the National Review (Williams 2017). As increasing numbers of far-right ideologues and financiers found themselves recast as liabilities within movement conservatism, an alternative right wing at last began to coalesce.

    These newly radioactive writers and politicos sought out new sites at which to produce media commodities. A constellation of paleo-friendly print serials such as Chronicles, Left and Right, and The Rothbard-Rockwell Report established “an interconnected set of rhetorical pipelines and echo chambers [to] amplify and repeat the messages and…ideology of the group into the mainstream” (Berlet 2008, 580). This paleoconservative alternative media, with its inferior range and capital resources, was well suited to producing increasingly unapologetic extremist ideological content for a small audience. However, this alternative print market proved simply too meager to deliver the American far-right back into power.

    Again, movement conservatism had succeeded in sanitizing and disciplining itself, throwing its ugliest tendencies to the margins of the market. By century’s end, paleoconservatism seemed a dead letter, dashed apart by internecine ideological conflicts over foreign interventionism and Austrian economics (Ashbee 2000, 82-83). The paleo-purge might even have achieved what the Bircher purge could not, ending paleoconservatism as an ideological project altogether—but for an epochal revolution in markets and technology brought about by the age of mass internet access.

    With the arrival of the internet—specifically Web 2.0 and the blogosphere—several key sites in the paleoconservative diaspora became launching pads for the incipient alt-right. The American Conservative, founded in 2002 by Pat Buchanan, Taki Theodoracopulos, and Scott McConnell, was perhaps the most high-profile of these post-paleo print/digital crossovers (Hawley 2017, 57-59). It would become a prime site of synergy and metastasis between paleocons and the proto-alt-right. TAC would give future alt-right figurehead Richard Spencer his entrée to publishing as an Assistant Editor from 2007 to 2008. When Spencer was fired (purportedly for his extremist beliefs), he found a soft landing at Theodoracopulos’s new endeavor, the blog TakiMag. One year later, Spencer would go on to found AlternativeRight.com, funded by another National Review exile, Peter Brimelow, and by disillusioned paleo-financier William H. Regnery II (ibid.).

    By the time Spencer left Taki’s Magazine in 2010, the era of “Web 2.0” was in full flower, characterized by increasingly inexpensive tools for developing professional-looking websites. However, these cosmetic improvements were in fact symptomatic of a more fundamental change in the power of publishing capital. With the arrival of Web 2.0, control over the relevant means of producing media commodities increasingly migrated to blogging platforms (WordPress), user-generated content sites (YouTube), and website-building software as a service (Squarespace). This technological shift occurred within the context of a broader financialization of the press, which decimated medium-sized publications and ushered in an era of precarious, contingent “content production” labor, feeding these new platforms a rush of media-industry refugees. While the largest media companies would continue to employ their own web developers, smaller companies and independent content producers quickly adopted these alternatives. This effected a radical reversal of the sale of labor between small media companies and web developers. Whereas in the past, web developers would have sold their labor to media companies, now small media producers sold theirs to an ever-shrinking handful of hosting, publishing, and design platforms, which reaped the surplus value of advertising and data mining.

    The success of this arrangement depended on an unprecedented alienation of labor, even to the extent that small content producers did not recognize the arrangement as such. The (capital-I) Ideological façade of individual empowerment which accompanied the tech-libertarian disruption of Web 2.0 ensured that companies would exercise no oversight save the bare legal minimum. The so-called “safe harbor” protections afforded to digital tech platforms by Section 230 of the Communications Decency Act fostered both the expansive logic of this new mode of capital exploitation and its Ideological rationale. Under the statute, interactive computer service providers such as the low-cost blogging platforms upon which the alt-right would be built could not be held accountable for the content or actions of their clients (Balasubramani 2017; Citron and Wittes 2017). As digital economic refugees flooded the new platforms during the years of the Great Recession, the new wielders of productive capital did not investigate their labor pool too deeply. The dregs of the American conservative movement were no exception to any of these pressures or affordances.

    Spencer seized this opportunity (albeit unwittingly) to launch AlternativeRight.com (Hawley 2017, 57). Now the American far-right became more eclectic than ever before. At AlternativeRight.com, paleoconservatives like Paul Gottfried and Sam Francis appeared alongside self-proclaimed “manosphere” misogynists like Matt Forney, academic antisemites Kevin MacDonald and Ricardo Duchesne, mainstream libertarians like David Gordon and Thomas Woods, and fringe “right-wing anarchists” Keith Preston and Jack Donovan. To these were added Norse pagan revivalists, heterodox Eastern Rite Christians, Evolan perennialists, and conspiracists of all stripes (Nagle 2017). While many factors contributed to this eclecticism (the biases and affordances of hypertext and Spencer’s intention to create a “big tent” movement, to name just two), market forces underpinned them all. Cross-pollination expanded Alternative Right’s readership, which in turn expanded funding opportunities, which subsequently created new readerships with new demands for representation within the burgeoning proto-alt-right. A similar phenomenon may be glimpsed today in the “alternative influence” networks which knit together far-right channels on user-generated content platforms such as YouTube (Lewis 2018).

    In the early 21st century, when arrangements of productive capital and technological capacities changed so radically, ideological projects that had endured, and even festered, in exile now returned to reclaim their place in the American conservative movement. What had been sanitized was reinfected; what had been disciplined was now set loose.

    The points of conjunction mentioned so far are only a few of the most obvious sites of inquiry at which a materialist analysis of the alt-right movement might begin. There are many more historical watersheds where technology, capital, and human intention met to produce what ultimately became the alt-right. We may point to the consumer-cultural revolt of #Gamergate, or to the strategic courting of online troll groups by Trump consigliere and former Breitbart.com executive chairman Steve Bannon (Green 2017). The ongoing role of Bitcoin and other cryptocurrencies in financing a now badly damaged alt-right raises a pressing need both for new modalities of digital political economy and for their application to the question of far-right extremism (Golumbia 2016). Deeper questions of labor and masculine identity have the potential to unearth entirely new vistas of investigation intersecting with gender and cultural theory (Kimmel 2018).

    However, we should not wait for an exhaustive materialist survey and analysis before applying lessons from the history (crudely) sketched above. These very same patterns of repression-exile-metastasis-and-return appear to be recurring in microcosm today, as mainstream conservatism has redrawn the boundaries of acceptability. Conservatism under Trump embraces some on its former extremes, while new, semi-disavowed fringes escalate to heights of ever-more spectacular violence. An array of legal and financial pressures forces sites such as 8chan toward distributed hosting strategies (Poulson 2019). The same combination of pressures is increasingly forcing far-right extremists onto encrypted messaging apps (Glaser 2019). Will these groups continue their ideological projects in exile? How might these ideologies blend, mutually provoke, and metastasize? And what unforeseen revolution in the relations of production might one day effect their ascent to power?

    Conclusion: Moving Faster

    The conditions under which the 20th-century American far-right financed and organized the production of its ideological commodities enabled a denial of its fringes. As each generation of the American far-right was forced to contend with increasingly narrow access to capital and productive means, new logics of producing ideological commodities emerged. With the revolution in technology and relations of labor incited by the internet and Web 2.0, and organized by a techno-libertarian legal regime, these far-right logics metastasized and returned to the broader cultural marketplace in the form of the alt-right.

    To the extent that the mass and momentum of capital and technology might have overwhelmed attempts at strategic intervention during these early periods, the culturalist approach to understanding the alt-right takes on renewed importance. Those periods of exile during which the far-right incubates its ugliest offspring are precisely the points at which culturalist insights might do the most to shape counterstrategy. The factors which incubated the alt-right may have belonged to Neiwert’s conspiracies, Nagle’s subculture wars, or some as-yet-unidentified tendency. During that period of incubation, in which capital, the law, technology, and social pressure converged to isolate and minimize the American far-right, it was at these sites that successful intervention might have occurred. Now that the end of the extremist right’s exile has laid bare the material causes for its return, political economy is positioned to make a case for intervention appropriate to the present day.

    The materialist analysis of this movement must be written. This analysis should be incorporated with the findings of culturalist study, so that together they can inform both policy and strategies of civil action. The scope of such a project seems large indeed. But perhaps it is only impossible if undertaken in a spirit of retreat or abstract reflection.

    In the short term, the lessons provided by this materialist sketch might help us to understand hidden dynamics in the cat-and-mouse game of deplatforming and reemergence that defines far-right activity on the internet today. As the history of American conservatism’s purges seems to indicate, deplatforming does indeed prevent the extreme fringes from wielding power and influence, but only for so long as they remain pushed to the margins. When these repressed tendencies return, as in the case of the alt-right, we are reminded that synergies and antagonisms of capital, labor, and technology have the power to carry these once-exiled fringes back into the world.

    _____

    Brian Hughes is a doctoral candidate and lecturer at the American University School of Communication. His work explores the impact of communication technology on political and religious extremism, terrorism and fringe culture. He is a Doctoral Fellow with the Center for Analysis of the Radical Right.


  • John Pat Leary — Innovation and the Neoliberal Idioms of Development

    John Pat Leary

    “Human creativity and human capacity is limitless,” said the Bangladeshi economist Muhammad Yunus to a darkened room full of rapt Austrian elites. The setting was TEDx Vienna, and Yunus’s address bore all the trademark features of TED’s missionary version of technocratic idealism. “We believe passionately in the power of ideas to change attitudes, lives and, ultimately, the world,” goes the TED mission statement, and this philosophy is manifest in the familiar form of Yunus’s talk (TED.com). The lighting was dramatic, the stage sparse, and the speaker alone on stage, with only his transformative ideas for company. The speech ends with the zealous technophilia that, along with the minimalist stagecraft and quaint faith in the old-fashioned power of lectures, defines this peculiar genre. “This is the age where we all have this capacity of technology,” Yunus declares: “The question is, do we have the methodology to use these capacities to address these problems?… The creativity of human beings has to be challenged to address the problems we have made for ourselves. If we do that, we can create a whole new world—we can create a whole new civilization” (Yunus 2012). Yunus’s conviction that now, finally and for the first time, we can solve the world’s most intractable problems is not itself new. Instead, what TED Talks like this offer is a new twist on the idea of progress we have inherited from the nineteenth century. And with his particular focus on the global South, Yunus riffs on a form of that old faith, which might seem like a relic of the twentieth: “development.” What is new, then, about Yunus’s articulation of these old faiths? It comes from the TED Talk’s combination of prophetic individualism and technophilia: this is the ideology of “innovation.”

    “Innovation”: a ubiquitous word with a slippery meaning. “An innovation is a novelty that sticks,” writes Michael North in Novelty: A History of the New, pointing out the basic ontological problem of the word: if it sticks, it ceases to be a novelty. “Innovation, defined as a widely accepted change,” he writes, “thus turns out to be the enemy of the new, even as it stands for the necessity of the new” (North 2013, 4). Originally a pejorative term for religious heresy, in its common use today “innovation” is a synonym for what would once have been called, especially in America, “futurity” or “progress.” In a policy paper entitled “A Strategy for American Innovation,” then-President Barack Obama described innovation as an American quality, in which the blessings of Providence are revealed no longer by the acquisition of territory, but rather by the accumulation of knowledge and technologies: “America has long been a nation of innovators. American scientists, engineers and entrepreneurs invented the microchip, created the Internet, invented the smartphone, started the revolution in biotechnology, and sent astronauts to the Moon. And America is just getting started” (National Economic Council and Office of Science and Technology Policy 2015, 10).

    In the Obama administration’s usage, we can see several of the common features of innovation as an economic ideology, some of which are familiar to students of American exceptionalism. First, it is benevolent. Second, it is always “just getting started,” a character of newness constantly being renewed. Third, as “progress” and “development” have been, innovation is a universal, benevolent abstraction made manifest through material, economic accomplishments. But even more than “progress,” which could refer to political and social accomplishments like universal suffrage or the polio vaccine, or “development,” which has had communist and social democratic variants, innovation is inextricable from the privatized market that animates it. For this reason, Obama can treat the state-sponsored moon landing and the iPhone as equivalent achievements. Finally, even if it belongs to the nation, the capacity for “innovation” really resides in the self. Hence Yunus’s faith in “creativity,” and Obama’s emphasis on “innovators,” the protagonists of this heroic drama, rather than the drama itself.

    This essay explores the individualistic, market-based ideology of “innovation” as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called “development.” I am referring principally to projects that often go under the name of “social innovation” (or, relatedly, “social entrepreneurship”), which Stanford University’s Business School defines as “a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions” (Stanford Graduate School of Business). “Social innovation” often advertises itself as “market-based solutions to poverty,” proceeding from the conviction that it is exclusion from the market, rather than the opposite, that causes poverty. The practices grouped under this broad umbrella include projects as different as the micro-lending banks, for which Yunus shared the 2006 Nobel Peace Prize; smokeless, cell-phone-charging cookstoves for South Asia’s rural peasantry; latrines that turn urine into electricity, for use in rural villages without running water; and the edtech academic and TED honoree Sugata Mitra’s “self-organized learning environment” (SOLE), which appears to consist mostly of giving internet-enabled laptops to poor children and calling it a day.

    The discourse of social innovation is a theory about economic process and also a story of the (first-world) self. The ideal innovator that emerges from the examples to follow is a flexible, socially autonomous individual, whose creativity and prophetic vision, nurtured by the market, can refashion social inequalities as discrete “problems” that simply await new solutions. Guided by a faith in the market but also shaped by the austerity that has slashed the budgets of humanitarian and development institutions worldwide, social innovation ideology marks a retreat from the social vision of development. Crucially, the ideologues of innovation also answer a post-development critique of Western arrogance with a generous, even democratic spirit. That is, one of the reasons that “innovation” has come to supersede “development” in the vocabulary of many humanitarian and foreign aid agencies is that innovation ideology’s emphasis on individual agency serves as a response to the legitimate charges of condescension and elitism long directed at Euro-American development agencies. But compromising the social vision of development also means jettisoning the ideal of global equality that, however deluded, dishonest, or self-serving it was, also came with it. This brings us to a critical feature of innovation thinking that is often disguised by the enthusiasm of its tech-economy evangelizers: it is in fact a pessimistic ideal of social change. The ideology of innovation, with its emphasis on processes rather than outcomes, and individual brilliance over social structures, asks us to accommodate global inequality, rather than challenge it. It is a kind of idealism, therefore, well suited to our dispiriting neoliberal moment, where the sense of possibility seems to have shrunk.

    My objective is not to evaluate these efforts individually, nor even to criticize their practical usefulness as solution-oriented projects (not all of them, anyway). Indeed, in response to the difficult, persistent question, “What is the alternative?” it is easy, and not terribly helpful, to simply answer “world socialism,” or at least “import-substitution industrialization.” My objective is perhaps more modest: to define the ideology of “innovation” that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.

    An Orthodoxy of Unorthodoxy: Innovation, Revolution, and Salvation

    Given the immodesty of the innovator archetype, it may seem odd that innovation ideology could be considered pessimistic. On its own terms, of course, it is not; but when measured against the utopian ambitions and rhetoric of many “social innovators” and technology evangelists, their actual prescriptions appear comparatively paltry. Human creativity is boundless, and everyone can be an innovator, says Yunus; this is the good news. The bad news, unfortunately, is that not everyone can have indoor plumbing or public lighting. Consider the “pee-powered toilet” sponsored by the Gates Foundation. The outcome of inadequate sewerage in the underdeveloped world has not been changed; only the process of its provision has been innovated (Smithers 2015). This combination of evangelical enthusiasm and piecemeal accommodation becomes clearer, however, when we excavate innovation’s tangled history, a history that the word itself seems at first glance to lack entirely.

    Figure 1. A demonstration toilet, capable of powering a light, or even a mobile phone, at the University of the West of England (photograph: UWE Bristol)

    For most of its history, the word has been synonymous with false prophecy and dissent: initially, it was linked to deceitful promises of deliverance, either from divine judgment or more temporal forms of punishment. For centuries, this was the most common usage of the term. The charge of innovation warned against either the possibility or the wisdom of remaking the world, and disciplined those “fickle changelings and poor discontents,” as the King says in Shakespeare’s Henry IV, grasping at “hurly-burly innovation.” Religious and political leaders tarred self-styled prophets or rebels as heretical “innovators.” In his 1634 Institution of the Christian Religion, for example, John Calvin warned that “a desire to innovate all things without punishment moveth troublesome men” (Calvin 1763, 716). Calvin’s notion that innovation was both a political and theological error reflected, of course, his own jealously kept share of temporal and spiritual authority. For Thomas Hobbes, “innovators” were venal conspirators, and innovation a “trumpet of war and sedition.” Distinguishing men from bees—which Aristotle, Hobbes says, wrongly considers a political animal like humans—Hobbes laments the “contestation of honour and preferment” that plagues non-apiary forms of sociality. Bees only “talk” when and how they have to; men and women, by contrast, chatter away in their vanity and ambition (Hobbes 1949, 65-67). The “innovators” of revolutionary Paris, Edmund Burke thundered later, “leave nothing unrent, unrifled, unravaged, or unpolluted with the slime of their filthy offal” (1798, 316-17). Innovation, like its close relative “revolution,” was upheaval, destruction, the reversal of the right order of things.

    Figure 2: The Innovation Tango, in The Evening World

    As Godin (2015) shows in his history of the concept in Europe, in the late nineteenth century “innovation” began to be recuperated as an instrumental force in the world, which was key to its transformation into the affirmative concept we know now. Francis Bacon, the philosopher and Lord Chancellor under King James I, was what we might call an “early adopter” of this new positive instrumental meaning. How, he asked, could Britons be so reverent of custom and so suspicious of “innovation,” when their Anglican faith was itself an innovation? (Bacon 1844, 32). Instead of being an act of sudden rending, rifling, and heretical ravaging, “innovation” became a process of patient material improvement. By the turn of the last century, the word had mostly lost its heretical associations. In fact, “innovation” was far enough removed from wickedness or malice in 1914 that the dance instructor Vernon Castle invented a modest American version of the tango that year and named it “the Innovation.” The partners never touched each other in this chaste improvement upon the Argentine dance. “It is the ideal dance for icebergs, surgeons in antiseptic raiment and militant moralists,” wrote Marguerite Marshall (1914), a thoroughly unimpressed dance critic in the New York Evening World. “Innovation” was then beginning to assume its common contemporary form in commercial advertising and economics, as a synonym for a broadly appealing, unthreatening modification of an existing product.

    Two years earlier, the Austrian-born economist Joseph Schumpeter published his landmark text The Theory of Economic Development, where he first used “innovation” to describe the function of the “entrepreneur” in economic history (1934, 74). For Schumpeter, it was in the innovation process that capitalism’s tendency towards tumult and creative transformation could be seen. He understood innovation historically, as a process of economic transformation, but he also singled out an innovator responsible for driving the process. In his 1942 book Capitalism, Socialism, and Democracy, Schumpeter returned to the idea in the midst of war and the threat of socialism, which gave the concept a new urgency. To innovate, he wrote, was “to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on” (Schumpeter 2003, 132). As Schumpeter goes on to acknowledge, this transformative process is hard to quantify or professionalize. The elusiveness of his theory of innovation comes from a central paradox in his own definition of the word: it is both a world-historical force and a quality of personal agency, both a material process and a moral characteristic. It was a historical process embodied in heroic individuals he called “New Men,” and exemplified in non-commercial examples, like the “expressionist liquidation of the object” in painting (126). To innovate was also to do, at the local level of the production process, what Marx and Engels credit the bourgeoisie, as a class, with accomplishing historically: revolutionizing the means of production, sweeping away what is old before it can ossify. Schumpeter told a different version of this story, though. For Marx, capitalist accumulation is a dialectical historical process, but what Schumpeter called innovation was a drama driven by a particular protagonist: the entrepreneur.

    In a sympathetic 1943 essay about Schumpeter’s theory of innovation, the Marxist economist Paul Sweezy criticized the centrality Schumpeter gave to individual agency. Sweezy’s interest in the concept is unsurprising, given how Schumpeter’s treatment of capitalism as a dynamic but destructive historical force draws upon Marx’s own. It is therefore not “innovation” as a process to which Sweezy objects, but the mythologized figure of the entrepreneurial “innovator,” the social type driving the process. Rather than a free agent, powering the economy’s inexorable progress, “we may instead regard the typical innovator as the tool of the social relations in which he is enmeshed and which force him to innovate on pain of elimination,” he writes (Sweezy 1943, 96). In other words, it is capital accumulation, not the entrepreneurial function, and certainly not some transcendent ideal of creativity and genius, that drives innovation. And while the innovator (the successful one, anyway) might achieve a pantomime of freedom within the market, for Sweezy this agency is always provisional, since innovation is a conditional economic practice of historically constituted subjects in a volatile and pitiless market, not a moral quality of human beings. Of course, Sweezy’s critique has not won the day. Instead, a particularly heroic version of the Schumpeterian sense of innovation as a human, moral quality liberated by the turbulence of capitalist markets is a mainstream feature of institutional life. An entire genre of business literature exists to teach the techniques of “managing creativity and innovation in the workplace” (The Institute of Leadership and Management 2007), to uncover the “map of innovation” (O’Connor and Brown 2003), to nurture the “art of innovation” (Kelley 2001), to close the “circle of innovation” (Peters 1999), to collect the recipes in “the innovator’s cookbook” (Johnson 2011), to give you the secrets of “the sorcerers and their apprentices” (Moss 2011)—business writers leave virtually no hackneyed metaphor for entrepreneurial creativity, from the domestic to the occult, untouched.

    As its contemporary proliferation shows, innovation has never quite lost its association with redemption and salvation, even if it is no longer used to signify their false promises. As Lepore (2014) has argued about its close cousin, “disruption,” innovation can be thought of as a secular discourse of economic and personal deliverance. Even as the concept became rehabilitated as procedural, its deviant and heretical connotations were common well into the twentieth century, when Emma Goldman (2000) proudly and defiantly described anarchy as an “uncompromising innovator” that enraged the princes and oligarchs of the world. Its seeming optimism, which is inseparable from the disasters from which it promises to deliver us, is thus best considered as a response to a host of persistent anxieties of twenty-first-century life: economic crisis, violence and war, political polarization, and ecological collapse. Yet the word has come to describe the reinvention or recalibration of processes, whether algorithmic, manufacturing, marketing, or otherwise. Indeed, even Schumpeter regarded the entrepreneurial function as basically technocratic. As he put it in one essay, “it consists in getting things done” (Schumpeter 1941, 151).[1] However, as the book titles above make clear, the entrepreneurial function is also a romance. If capitalism was to survive and thrive, Schumpeter suggested, it needed to do more than produce great fortunes: it had to excite the imagination. Otherwise, it would simply calcify into the very routines it was charged with overthrowing. Innovation discourse today remains, paradoxically, both procedural and prophetic. The former meaning lends innovation discourse its piecemeal, solution-oriented accommodation to inequality. In this latter sense, though, the word retains some of the heretical rebelliousness of its origins. We are familiar with the lionization of the tech CEO as a nonconforming or “disruptive” visionary, who sets out to “move fast and break things,” as the famous Facebook motto went. The archetypal Silicon Valley innovator is forward-looking and rebellious, regardless of how we might characterize the results of his or her innovation—a social network, a data mining scheme, or Uber-for-whatever. The dissenting meaning of innovation is at play in the case of social innovation, as well, given its aim to address social inequalities in significant new ways. So, in spite of innovation’s implicit bias towards the new, the history and present-day use of the word remind us that its current meaning is seeded with its older ones. Innovation’s new secular, instrumental meaning is therefore not a break with its older, prohibited, religious connotation, but an embellishment of it: what is described here is a spirit, an ideal, an ideological rescrambling of the word’s older heterodox meaning to suit a new orthodoxy.

    The Innovation of Underdevelopment: From Exploitation to Exclusion

    In his 1949 inaugural address, which is often credited with popularizing the concept of “development,” Harry Truman called for “a bold new program for making the benefits of our scientific advances and industrial progress available for the improvement and growth of underdeveloped areas” (Truman 1949).[2] “Development” in U.S. modernization theory was defined, writes Nils Gilman, by “progress in technology, military and bureaucratic institutions, and the political and social structure” (2003, 3). It was a post-colonial version of progress that defined itself as universal and placeless; all underdeveloped societies could follow a similar path. As Kristin Ross argues, development in the vein of post-war modernization theory anticipated a future “spatial and temporal convergence” (1996, 11-12). Emerging in the collapse of European colonialism, the concept’s positive value was that it positioned the whole world, south and north, as capable of the same level of social and technical achievement. As Ross suggests, however, the future “convergence” that development anticipates is a kind of Euro-American ego-ideal—the rest of the world’s brightest possible future resembled the present of the United States or western Europe. As Gilman puts it, the modernity development looked forward to was “an abstract version of what postwar American liberals wished their country to be.”

    Emerging as it did in the decline, and then in the wake, of Europe’s African, Asian, and American empires, mainstream mid-century writing on development trod carefully around the issue of exploitation. Gunnar Myrdal, for example, was careful to distinguish the “dynamic” term “underdeveloped” from its predecessor, “backwards” (1957, 7). Rather than view the underdeveloped as static wards of more “advanced” metropolitan countries, in other words, the preference was to view all peoples as capable of historical dynamism, even if they occupied different stages on a singular timeline. Popularizers of modernization theory like Walter Rostow described development as a historical stage that could be measured by certain material benchmarks, like per-capita car ownership. But it also required immaterial, subjective cultural achievements, as Josefina Saldaña-Portillo, Jorge Larrain, and Molly Geidel have pointed out. In his well-known Stages of Economic Growth, Rostow emphasized how achieving modernity required the acquisition of what he called “attitudes,” such as a “Newtonian” worldview and an acclimation to “a life of change and specialized function” (1965, 26). His emphasis on cultural attributes—prerequisites for starting development that are also consequences of achieving it—is an example of the development concept’s circular, often self-contradictory meanings. “Development” was both a process and its end point—a nation undergoes development in order to achieve development, something Cowen and Shenton call the “old utilitarian tautology of development” (1996, 4), in which a precondition for achieving development would appear to be its presence at the outset.

    This tautology eventually circles back to what Nustad (2007, 40) calls the lingering colonial relationship of trusteeship, the original implication of colonial “development.” For post-colonial critics of developmentalism, the very notion of “development” as a process unfolding in time is inseparable from this colonial relation, given the explicit or implicit Euro-American telos of most, if not all, development models. Where modernization theorists “naturalized development’s emergence into a series of discrete stages,” Saldaña-Portillo (2003, 27) writes, the Marxist economists and historians grouped loosely under the heading of “dependency theory” spatialized global inequality, using a model of “core” and “periphery” economies to counter the model of “traditional” and “modern” ones. Two such theorists, Andre Gunder Frank and Walter Rodney, framed their critiques of development with the grammar of the word itself. Like “innovation,” “development” is a progressive noun, which indicates an ongoing process in time. Its temporal and agential imprecision—when will the process ever end? Can it? Who is in charge?—helps to lend development a sense of moral and political neutrality, which it shares with “innovation.” Frank titled his most famous work on the subject The Development of Underdevelopment, the title emphasizing the point that underdevelopment was not a mere absence of development, but capitalist development’s necessary product. Rodney’s book How Europe Underdeveloped Africa did something similar, by making “underdevelop” into a transitive verb, rather than treating “underdevelopment” as a neutral condition.[3]

    As Luc Boltanski and Eve Chiapello argue, this language of neutrality became a hallmark of European accounts of global poverty and underdevelopment after the 1960s. Their survey of economics and development literature charts the rise of the category of “exclusion” (and its opposite number, “empowerment”) and the gradual disappearance of “exploitation” from economic and humanitarian literature about poverty. No single person, firm, institution, party, or class is responsible for “exclusion,” Boltanski and Chiapello explain. Reframing exploitation as exclusion therefore “permits identification of something negative without proceeding to level accusations. The excluded are no one’s victims” (2007, 347, 354). Exploitation is a circumstance that enriches the exploiter; the poverty that results from exclusion, however, is a misfortune profiting no one. Consider, as an example, the mission statement of the Grameen Foundation, which grew out of Yunus’s Grameen Bank. It remains one of the leading microlenders in the world, devoted to bringing impoverished people in the global South, especially women, into the financial system through the provision of small, low-collateral loans. “Empowerment” and “innovation” are two of its core values. “We champion innovation that makes a difference in the lives of the poor,” runs one plank of the Foundation’s mission statement (Grameen Foundation India n.d.). “We seek to empower the world’s poor, especially the poorest women.” “Innovation” is often not defined in such statements, but rather treated as self-evidently meaningful. Like “development,” innovation is a perpetually ongoing process, with no clear beginning or end. One undergoes development to achieve development; innovation, in turn, is the pursuit of innovation, and as soon as one innovates, the innovation thus created soon ceases to be an innovation. This wearying semantic circle helps evacuate the process of its power dynamics, of winners and losers. As Evgeny Morozov (2014, 5) has argued about what he calls “solutionism,” the celebration of technological and design fixes approaches social problems like inequality, infrastructural collapse, inadequate housing, etc.—which might be regarded as results of “exploitation”—as intellectual puzzles for which we simply have to discover the solutions. The problems are not political; rather, they are conceptual: we either haven’t had the right ideas, or else we haven’t applied them right.[4] Grameen’s mission, to bring the world’s poorest into financial markets that currently do not include them, relies on a fundamental presumption: that the global financial system is something you should definitely want to be a part of.[5] But as Banerjee et al. (2015, 23) have argued, to the extent that microcredit programs offer benefits, they mostly accrue to already profitable businesses. The broader social benefits touted by the programs—women’s “empowerment,” more regular school attendance, and so on—were either negligible or non-existent. And as a local government official in the Indian state of Andhra Pradesh told the New York Times in 2010, microloan programs in his district had not proven to be less exploitative than their predecessors, only more remote. “The money lender lives in the community,” he said. “At least you can burn down his house” (Polgreen and Bajaj 2010).

    Humanitarian Innovation and the Idea of “The Poor”

    Yunus’s TED Talk and the Grameen Foundation’s mission statement draw on the twinned ideal of innovation as procedure and salvation, and in so doing they recapitulate development’s modernist faith in the leveling possibilities of technology, albeit with the individualist, market-based zeal that is particular to neoliberal innovation thinking. “Humanitarian innovation” is a growing subfield of international development theory, which, like “social innovation,” encourages market-based solutions to poverty. Most scholars date the concept to the 2009 fair held by ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action), an international humanitarian aid agency that measures and evaluates aid programs. Two of its leading academic proponents, Alexander Betts and Louise Bloom of the Oxford Humanitarian Innovation Project (HIP), define it thus:

    “Innovation is the way in which individuals or organizations solve problems and create change by introducing new solutions to existing problems. Contrary to popular belief, these solutions do not have to be technological and they do not have to be transformative; they simply involve the adaptation of a product or process to context. ‘Humanitarian’ innovation may be understood, in turn, as ‘using the resources and opportunities around you in a particular context, to do something different to what has been done before’ to solve humanitarian challenges” (Betts and Bloom 2015, 4).[6]

    Here and elsewhere, the HIP hews closely to conventional Schumpeterian definitions of the term, which indeed inform most uses of “innovation” in the private sector and elsewhere: as a means of “solving problems.” Read in this light, “innovation” might seem rather innocuous, even banal: a handy way of naming a human capacity for adaptation, improvisation, and organization. But elsewhere, the authors describe humanitarian innovation as an urgent response to very specific contemporary problems that are political and ecological in nature. “Over the past decade, faced with growing resource constraints, humanitarian agencies have held high hopes for contributions from the private sector, particularly the business community,” they write. Compounding this climate of economic austerity that derives from “growing resource constraints” is an environmental and geopolitical crisis that means “record numbers of people are displaced for longer periods by natural disasters and escalating conflicts.” But despite this combination of violence, ecological degradation, and austerity, there is hope in technology: “new technologies, partners, and concepts allow humanitarian actors to understand and address problems quickly and effectively” (Betts and Bloom 2014, 5-6).

    The trope of “exclusion,” and its reliance on a rather anodyne vision of the global financial system as a fair sorter of opportunities and rewards, is crucial to a field that counsels collaboration with the private sector. Indeed, humanitarian innovators adopt a financial vocabulary of “scaling,” “stakeholders,” and “risk” in assessing the dangers and effectiveness (the “costs” and “benefits”) of particular tactics or technologies. In one paper on entrepreneurial activity in refugee camps, de la Chaux and Haugh make an argument in keeping with innovation discourse’s combination of technocratic proceduralism and utopian grandiosity: “Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” they write, emphasizing in their first clause the political and economic austerity that conditions the “entrepreneurial” response (2014, 2). Relying on an exclusion paradigm, the authors point to a “lack of functioning markets” as a cause of poverty in the camps. By “lack of functioning markets,” de la Chaux and Haugh mean lack of capital—but “market,” in this framework, becomes simply an institutional apparatus which one enters and is adjudicated on one’s merits, rather than a field of conflict in which one labors in a globalized class society. At the same time, “innovation” that “empowers” the world’s “poorest” also inherits an enduring faith in technology as a universal instrument of progress. One of the preferred terms for this faith is “design”: a form of techne that, two of its most famous advocates argue, “addresses the needs of the people who will consume a product or service and the infrastructure that enables it” (Brown and Wyatt 2010).[7] The optimism of design proceeds from the conviction that systems—water safety, nutrition, etc.—fail because they are designed improperly, without input from their users. De la Chaux addresses how ostensibly temporary camps grow into permanent settlements, using Jordan’s Za’atari refugee camp near the Syrian border as an example. Her elegant solution to the infrastructural problems these under-resourced and overpopulated communities experience? “Include urban planners in the early phases of the humanitarian emergency to design out future infrastructure problems,” as if the political question of resources were merely secondary to technical questions of design and expertise (de la Chaux and Haugh 2014, 19; de la Chaux 2015).

    In these examples, we can see once again how the ideal type of the “innovator” or entrepreneur emerges as the protagonist of the historical and economic drama unfolding in the peripheral spaces of the world economy. The humanitarian innovator is a flexible, versatile, pliant, and autonomous individual, whose potential is realized in the struggle for wealth accumulation, but whose private zeal for accumulation is thought to benefit society as a whole.[8] Humanitarian or social innovation discourse emphasizes the agency and creativity of “the poor,” by discursively centering the authority of the “user” or entrepreneur rather than the agency or the consumer. Individual qualities like purpose, passion, creativity, and serendipity are mobilized in the service of broad social goals. Yet while this sort of individualism is central in the literature of social and humanitarian innovation, it is not itself a radically new “innovation.” It instead recalls a pattern that Molly Geidel has recently traced in the literature and philosophy of the Peace Corps. In Peace Corps memoirs and in the agency’s own literature, she writes, the “romantic desire” for salvation and identification with the excluded “poor” was channeled into the “technocratic language of development” (2015, 64).

    Innovation’s emphasis on the intellectual, spiritual, and creative faculties of the single entrepreneur as historically decisive recapitulates, in these especially individualistic terms, a persistent thread in Cold War development thinking: its emphasis on cultural transformations as prerequisites for economic ones. At the same time, humanitarian innovation’s anti-bureaucratic ethos of autonomy and creativity is often framed as a critique of “developmentalism” as a practice and an industry. It is a response to criticisms of twentieth-century development as a form of neocolonialism: as too growth-dependent, too detached from local needs, too fixated on big projects, too hierarchical. Consider the development agency UNICEF, whose 2014 “Innovation Annual Report” embraces a vocabulary and funding model borrowed from venture capital. “We knew that we needed to help solve concrete problems experienced by real people,” reads the report, “not just building imagined solutions at our New York headquarters and then deploy them” (UNICEF 2014, 2). Rejecting a hierarchical model of modernization, in which an American developmentalist elite “deploys” its models elsewhere, UNICEF proposes “empowerment” from within. And in place of “development,” as a technical process of improvement from a belated historical and economic position of premodernity, there is “innovation,” the creative capacity responsive to the desires and talents of the underdeveloped.

    As in the social innovation model promoted by the Stanford Business School and the ideal of “empowerment” advanced by Grameen, the literature of humanitarian innovation sees “the market” as a neutral field. The conflict between the private sector, the military, and other non-humanitarian actors in the process of humanitarian innovation is mitigated by considering each as an equivalent “stakeholder,” with a shared “investment” in the enterprise and its success; abuse of the humanitarian mission by profit-seeking and military “stakeholders” can be prevented via the fabrication of “best practices” and “voluntary codes of conduct” (Betts and Bloom 2015, 24). One report, produced for ALNAP along with the Humanitarian Innovation Fund, draws on Everett Rogers’s canonical theory of innovation diffusion, which taxonomizes and explains the ways innovative products or methods circulate, from the most forward-thinking “early adopters” to the “laggards” (1983, 247-250). The ALNAP report does grapple with the problems of importing profit-seeking models into humanitarian work, however. “In general,” write Obrecht and Warner (2014, 80-81), “it is important to bear in mind that the objective for humanitarian scaling is improvement to humanitarian assistance, not profit.” Here, the problem is explained as one of “diffusion” and institutional biases in non-profit organizations, not a conflict of interest or a failing of the private market. In the humanitarian sector, they write, “early adopters” of innovations developed elsewhere are comparatively rare, since non-profit workers tend to be biased towards techniques and products they develop themselves. However, as Wendy Brown (2015, 129) has recently argued about the concepts of “best practices” and “benchmarking,” the problem is not necessarily that the goals being set or practices being emulated are intrinsically bad. The problem lies in “the separation of practices from products,” or in other words, the notion that organizational practices translate seamlessly across business, political, and knowledge enterprises, and that different products—market dominance, massive profits, reliable electricity in a rural hamlet, basic literacy—can be accomplished via practices imported from the business world.

    Again, my objective here is not to evaluate the success of individual initiatives pursued under this rubric, nor to castigate individual humanitarian aid projects as irredeemably “neoliberal” and therefore beyond the pale. To do so basks a bit too easily in the comfort of condemnation that the pejorative “neoliberal” offers the social critic, and it runs the risk, as Ferguson (2009, 169) writes, of nostalgia for the era of “old-style developmental states,” which were mostly capitalist as well, after all.[9] Instead, my point is to emphasize the political work that “innovation” as a concept does: it depoliticizes the resource scarcity that makes it seem necessary in the first place by treating the private market as a neutral arbiter or helpful partner rather than an exploiter, and it does so by disavowing the power of a Western subject through the supposed humility and democratic patina of its rhetoric. For example, the USAID Development Innovation Ventures, which seeds projects that will later win support from private lenders, stipulates that “applicants must explain how they will use DIV funds in a catalytic fashion so that they can raise needed resources from sources other than DIV” (USAID 2017). The hoped-for innovation here, it would seem, is the skill with which the applicants accommodate the scarcity of resources, and the facility with which they commercialize their project. One funded project, an initiative to encourage bicycle helmet use in Cambodia, “has the potential to save the Cambodian government millions of dollars over the next 10 years,” the description proclaims. But just because something saves the Cambodian government millions does not mean there is a net gain for the health and safety of Cambodians. The savings could simply allow the Cambodian government to give more money away to private industry, or to buy $10 million worth of new weapons to police the Laotian border. “Innovation,” here, requires an adjustment to austerity.

    Adjustment, often reframed positively as “resilience,” is a key concept in this literature. In another report, Betts, Bloom, and Weaver (2015, 8) single out a few exemplary innovators from the informal economy of the displaced person’s camp. They include tailors in a Syrian camp’s outdoor market; the Somali owner of an internet café in a Kenyan refugee camp; an Ethiopian man who repairs refrigerators with salvaged air conditioners and fans; and a Ugandan who built a video-game arcade in a settlement near the Rwandan border. This man, identified only as Abdi, has amassed a collection of second-hand televisions and game consoles he acquired in Kampala, the Ugandan capital. “Instead of waiting for donors I wanted to make a living,” says Abdi in the report, exemplifying the values of what Betts, Bloom, and Weaver call “bottom-up innovation” by the refugee entrepreneur. Their assessment is a generous one that embraces the ingenuity and knowledge of displaced and impoverished people affected by crisis. Top-down or “sector-wide” development aid, they write, “disregards the capabilities and adaptive resourcefulness that people and communities affected by conflict and disaster often demonstrate” (2015, 2). In this report, refugees are people of “great resilience,” whose “creativity” makes them “change makers.” As Julian Reid and Brad Evans write, we apply the word “resilient” to a population “insofar as it adapts to rather than resists the conditions of its suffering in the world” (2014, 81). The discourse of humanitarian innovation makes the same concession to the inevitability of the structural conditions that make such resilience necessary in the first place. Nowhere is it suggested that refugee capitalists might be other than benevolent, or that inclusion in circuits of national and transnational capital might exacerbate existing inequalities rather than transcend them. Nor do humanitarian innovation advocates ever argue that market-based product and service “innovation” is, in a refugee context, beneficial to the whole, given the paucity of employment and services in affected communities; this would at least be an arguable point. The problem is that the question is never even asked. The market is like oxygen.

    Conclusion: The TED Talk and the Innovation Romance

    In 2003, I visited a recently settled barrio—one could call it a “shantytown”—perched on a hillside high above the east side of Caracas. I remember vividly a wooden, handmade press, ringed with barbed wire scavenged from a nearby business, that its owner, a middle-aged woman newly arrived in the capital, used to crush sugar cane into juice. It was certainly an innovation, by any reasonable definition: a novel, creative solution to a problem of scarcity, a new process for doing something. I remember being deeply impressed by the device, which I found brilliantly ingenious. What I never thought to call it, though, was a “solution” to its owner’s poverty. Nor, I am sure, did she; she lived in a hard-core chavista neighborhood, where dispossessing the country’s “oligarchs” would have been offered as a better innovation—in the old Emma Goldman sense. My point, therefore, is not that the individual ingenuity, creativity, fearlessness, and hard work of people like the video-game entrepreneur in Uganda, or that woman in Caracas, in resisting the impossible demands that transnational capital has placed on them, are disreputable things to single out and praise. Quite the contrary: my objection is to the capitulation to their exploitation that is smuggled in with this admiration.

    I have argued that “innovation” is, at best, a vague concept asked to accommodate far too much in its combination of heroic and technocratic meanings. Innovation, in its modern meaning, is about revolutionizing “process” and technique; it often leaves outcomes unexamined and unquestioned. The outcome of that innovative sugar cane press in Caracas is still a meager income selling juice in a perilous informal marketplace. The promiscuity of innovation’s use also makes it highly mobile and subject to abuse, as even enthusiastic users of the concept, like Betts and Bloom at the Oxford Humanitarian Innovation Project, acknowledge. As they caution, “use of the term in the humanitarian system has lacked conceptual clarity, leading to misuse, overuse, and the risk that it may become hollow rhetoric” (2014, 5). I have also argued that innovation, especially in the context of neoliberal development, must be understood in moral terms, as it makes a virtue of private accumulation and accommodation to scarcity, and it circulates an ego-ideal of the first-world self to an audience of its admirers. It is also an ideological celebration of what Harvey calls the neoliberal alignment of individual well-being with unregulated markets, and what Brown calls “the economization of the self” (2015, 33). Finally, as a response to the enduring crises of third-world poverty, exacerbated by the economic and ecological dangers of the twenty-first century, the language of innovation beats a pessimistic retreat from the ideal of global equality that, in theory at least, development in its manifold forms always held out as its horizon.

    Innovation discourse draws on deep wells—its moral claim is not new, as a reader of The Protestant Ethic and the Spirit of Capitalism will observe. Inspired in part by the example of Benjamin Franklin’s autobiography, Max Weber argued that capitalism in its ascendancy reimagined profit-seeking activities, which might once have been described as avaricious or vulgar, as a virtuous “ethos” (2001, 16-17). Capitalism’s challenge to tradition, Weber argued, demanded some justification; reframing business as a calling or a vocation could help provide one. Capitalism in our time still demands validation not only as a virtuous discipline, but as an enterprise devoted to serving the “common good,” write Boltanski and Chiapello. As they say, “an existence attuned to the requirements of accumulation must be marked out for a large number of actors to deem it worth the effort of being lived” (2007, 10-11). “Innovation” as an ideology marks out this sphere of purposeful living for the contemporary managerial classes. Here, again, the word’s close association with “creativity” is instrumental, since creativity is often thought to be an intrinsic, instinctual human behavior. “Innovating” is therefore not only a business practice that will, as Franklin argued about his own industriousness, improve oneself in the eyes of both man and God. It is also a secular expression of the most fundamental individual and social features of the self—the impulse to understand and to improve the world. This is particularly evident in the discourse of social innovation, which the Social Innovation Lab at Stanford defines as a practice that aims to leverage the private market to solve modern society’s most intractable “problems”: housing, pollution, hunger, education, and so on. When something like world hunger is described as a “problem” in this way, though, international food systems, agribusiness, international trade, land ownership, and other sources of malnutrition disappear. Structures of oppression and inequality simply become discrete “problems” for which no one has yet invented the fix. They are individual nails in search of a hammer, and the social innovator is quite confident that a hammer exists for hunger.

    Microfinance is another one of these hammers. As one economist critical of the microcredit system notes at the beginning of his own book on the subject, “most accounts of microfinance—the large-scale, businesslike provision of financial services to poor people—begin with a story” (Roodman 2012, 1). These are usually narratives of an encounter with a sympathetic third-world subject. For Roodman, the microfinancial stories of hardship and transcendence have a seductive power over their first-world audiences, of which he is legitimately suspicious. As we saw above, Schumpeter’s procedural “entrepreneurial function” is itself also a story of a creative entrepreneur navigating the tempests of modern capitalism. In the postmodern romance of social innovation in the “underdeveloped” world, the Western subject of the drama is both ever-present and constantly disavowed. The TED Talk, with which we began, is in its crude way the most expressive genre of this contemporary version of the entrepreneurial romance.

    Rhetorically transformative but formally archaic—what could be less innovative than a lecture?—the genre of the social innovation TED Talk models innovation ideology’s combination of grandiosity and proceduralism, even as its strict generic conventions—so often and easily parodied—repeatedly undermine the speakers’ regular claims to transcendent breakthroughs. For example, in his TEDx Montreal address, Ethan Kay (2012) began in the conventional way: with a dire assessment of a monumental, yet easily overlooked, social problem in a third-world country. “If we were to think about the biggest problems affecting our world,” Kay begins, “any socially conscious person would have to include poverty, disease, and climate change. And yet there is one thing that causes all three of these simultaneously, that we pay almost no attention to, even though a very good solution exists.” Having established the scope of the problem, next comes the sentimental identification. The knowledge of this social problem is only possible because of the hospitality and insight of some poor person abroad, something familiar from Geidel’s reading of Peace Corps memoirs and Roodman’s microcredit stories: in Kay’s case, it is in the unelectrified “hut” of a rural Indian woman where, choking on cooking smoke, he realizes the need for a clean-burning indoor cookstove. Then comes the self-deprecating joke, in which the speaker acknowledges his early naivete and establishes his humble capacity for self-reflection. (“I’m just a guy from Cleveland, Ohio, who has trouble cooking a grilled-cheese sandwich,” says Kay, winning a few reluctant laughs.) And then the technocratic solution: the insight thus acquired, subjected to the speaker’s reason and empathy, yields the deceptively simple and yet world-making “solution.” Despite the prominent formal place of the underdeveloped character in this genre, the teller of the innovation story inevitably ends up the hero. The throat-clearing self-seriousness, the ritualistic gestures of humility, the promise to the audience of transformative change without inconvenient political consequences, and the faith in technology as a social leveler all perform the TED Talk’s ego-ideal of social “innovation.”

    One of the most successful social innovation TED Talks is Sugata Mitra’s tale of the “self-organized learning environment” (SOLE). Mitra won a $1 million prize from TED in 2013 for a talk based on his “hole-in-the-wall” experiment in New Delhi, which tests poor children’s ability to learn autonomously, guided only by internet-enabled laptops and cloud-based adult mentors abroad (TED.com 2013). Mitra’s idea was an excellent example of innovation discourse’s combination of the procedural and the prophetic. In the case of the latter, he begins: “There was a time when Stone Age men and women used to sit and look up at the sky and say, ‘What are those twinkling lights?’ They built the first curriculum, but we’ve lost sight of those wondrous questions” (Mitra 2013). What gets us to this lofty goal, however, is a comparatively simple process. True to genre, Mitra describes the SOLE as the fruit of a serendipitous discovery. After he and his colleagues cut a hole in the wall that separated his technology firm’s offices from an adjoining New Delhi slum, they placed an Internet-enabled computer in the new common area. When he returned weeks later, Mitra found local children using it expertly. Leaving unsupervised children in a room with a laptop, it turns out, activates innate capacities for self-directed learning stifled by conventional schooling. Mitra promises a cost-effective solution to the problem of primary and secondary education in the developing world—do virtually nothing. “This is done by children without the help of any teacher,” Mitra confidently concludes, sharing a PowerPoint slide of the students’ work. “The teacher only raises the question, and then stands back and admires the answer.”

    When we consider innovation’s religious origins in false prophecy, its current orthodoxy in the discourse of technological evangelism—and, more broadly, in analog versions of social innovation—appears as a nearly literal example of Rayvon Fouché’s argument that the formerly colonized, “once attended to by bibles and missionaries, now receive the proselytizing efforts of computer scientists wielding integrated circuits in the digital age” (2012, 62). One of the additional ironies of contemporary innovation ideology, though, is that the populations exploited by global capitalism are increasingly charged with redeeming it—the comfortable denizens of the West need only “stand back and admire” the process driven by the entrepreneurial labor of the newly digital underdeveloped subject. To the pain of unemployment, the selfishness of material pursuits, the exploitation of most of humanity by a fraction, the specter of environmental cataclysm that stalks our future and haunts our imagination, and the scandal of illiteracy, market-driven innovation projects like Mitra’s “hole in the wall” offer next to nothing, while claiming to offer almost everything.

    _____

    John Patrick Leary is associate professor of English at Wayne State University in Detroit and a visiting scholar in the Program in Literary Theory at the Universidade de Lisboa in Portugal in 2019. He is the author of A Cultural History of Underdevelopment: Latin America in the U.S. Imagination (Virginia 2016) and Keywords: The New Language of Capitalism, forthcoming in 2019 from Haymarket Books. He blogs about the language and culture of contemporary capitalism at theageofausterity.wordpress.com.


    _____

    Notes

    [1] “The entrepreneur and his function are not difficult to conceptualize,” Schumpeter writes: “the defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation).”

    [2] The term “underdeveloped” was only a bit older: it first appeared in “The Economic Advancement of Under-developed Areas,” a 1942 pamphlet on colonial economic planning by a British economist, Wilfrid Benson.

    [3] I explore this semantic and intellectual history in more detail in my book, A Cultural History of Underdevelopment (Leary, 4-10).

    [4] Morozov describes solutionism as an ideology that sanctions the following delusion: “Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”

    [5] “Although the number of unbanked people globally dropped by half a billion from 2011 to 2014,” reads a Foundation web site’s entry under the “financial services” tab, “two billion people are still locked out of formal financial services.” One solution to this problem focuses on Filipino convenience stores, called “sari-sari” stores: “In a project funded by the JPMorgan Chase Foundation, Grameen Foundation is empowering sari-sari store operators to serve as digital financial service agents to their customers.” Clearly, the project must result not only in connecting customers to financial services, but in opening up new markets to JP Morgan Chase. See “Alternative Channels.”

    [6] This quoted definition of “humanitarian innovation” is attributed to an interview with an unnamed international aid worker.

    [7] Erickson (2015, 113-14) writes that “design thinking” in public education “offers the illusion that structural and institutional problems can be solved through a series of cognitive actions…” She calls it “magic, the only alchemy that matters.”

    [8] A management-studies article on the growth of so-called “innovation prizes” for global development claimed sunnily that at a recent conference devoted to such incentives, “there was a sense that society is on the brink of something new, something big, and something that has the power to change the world for the better” (Everett, Wagner, and Barnett 2012, 108).

    [9] “It is here that we have to look more carefully at the ‘arts of government’ that have so radically reconfigured the world in the last few decades,” writes Ferguson, “and I think we have to come up with something more interesting to say about them than just that we’re against them.” Ferguson points out that neoliberalism in Africa—the violent disruption of national markets by imperial capital—looks much different than it does in western Europe, where it is usually treated as a form of political rationality or an “art of government” modeled on markets. It is the political rationality, as it is formed through an encounter with the “third world” object of imperial neoliberal capital, that is my concern here.

    _____

    Works Cited

    • Bacon, Francis. 1844. The Works of Francis Bacon, Lord Chancellor of England. Vol. 1. London: Carey and Hart.
    • Banerjee, Abhijit, et al. 2015. “The Miracle of Microfinance? Evidence from a Randomized Evaluation.” American Economic Journal: Applied Economics 7:1.
    • Betts, Alexander, Louise Bloom, and Nina Weaver. 2015. “Refugee Innovation: Humanitarian Innovation That Starts with Communities.” Humanitarian Innovation Project, University of Oxford.
    • Betts, Alexander and Louise Bloom. 2014. “Humanitarian Innovation: The State of the Art.” OCHA Policy and Studies Series.
    • Boltanski, Luc and Eve Chiapello. 2007. The New Spirit of Capitalism. Translated by Gregory Elliot. New York: Verso.
    • Brown, Tim and Jocelyn Wyatt. 2010. “Design Thinking for Social Innovation.” Stanford Social Innovation Review.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Burke, Edmund. 1798. The Beauties of the Late Right Hon. Edmund Burke, Selected from the Writings, &c., of that Extraordinary Man. London: J.W. Myers.
    • Calvin, John. 1763. The Institution of the Christian Religion. Translated by Thomas Norton. Glasgow: John Bryce and Archibald McLean.
    • Clark, Donald. 2013. “Sugata Mitra: Slum Chic? 7 Reasons for Doubt.”
    • Cowen, M.P. and R.W. Shenton. 1996. Doctrines of Development. London: Routledge.
    • De la Chaux, Marlen. 2015. “Rethinking Refugee Camps: Turning Boredom into Innovation.” The Conversation (Sep 24).
    • De la Chaux, Marlen and Helen Haugh. 2014. “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps.” Judge Business School, University of Cambridge.
    • Erickson, Megan. 2015. Class War: The Privatization of Childhood. New York: Verso.
    • Everett, Bryony, Erika Wagner, and Christopher Barnett. 2012. “Using Innovation Prizes to Achieve the Millennium Development Goals.” Innovations: Technology, Governance, Globalization 7:1.
    • Ferguson, James. 2009. “The Uses of Neoliberalism.” Antipode 41:S1.
    • Fouché, Rayvon. 2012. “From Black Inventors to One Laptop Per Child: Exporting a Racial Politics of Technology.” In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge. 61-84.
    • Frank, Andre Gunder. 1991. The Development of Underdevelopment. Stockholm, Sweden: Bethany Books.
    • Geidel, Molly. 2015. Peace Corps Fantasies: How Development Shaped the Global Sixties. Minneapolis: University of Minnesota Press.
    • Gilman, Nils. 2003. Mandarins of the Future: Modernization Theory in Cold War America. Baltimore: Johns Hopkins University Press.
    • Godin, Benoit. 2015. Innovation Contested: The Idea of Innovation Over the Centuries. New York: Routledge.
    • Goldman, Emma. 2000. “Anarchism: What It Really Stands For.” Marxists Internet Archive.
    • Grameen Foundation India. No date. “Our History.”
    • Hobbes, Thomas. 1949. De Cive, or The Citizen. New York: Appleton-Century-Crofts.
    • Institute of Leadership and Management. 2007. Managing Creativity and Innovation in the Workplace. Oxford, UK: Elsevier.
    • Johnson, Steven. 2011. The Innovator’s Cookbook: Essentials for Inventing What is Next. New York: Riverhead.
    • Kay, Ethan. 2012. “Saving Lives Through Clean Cookstoves.” TEDx Montreal.
    • Kelley, Tom. 2001. The Art of Innovation: Lessons in Creativity from IDEO, America’s Leading Design Firm. New York: Crown Business.
    • Larrain, Jorge. 1991. Theories of Development: Capitalism, Colonialism and Dependency. New York: Wiley.
    • Leary, John Patrick. 2016. A Cultural History of Underdevelopment: Latin America in the U.S. Imagination. University of Virginia Press.
    • Lepore, Jill. 2014. “The Disruption Machine: What the Gospel of Innovation Gets Wrong.” The New Yorker (Jun 23).
    • Marshall, Marguerite Moore. 1914. “In Dancing the Denatured Tango the Couple Keep Two Feet Apart.” The Evening World (Jan 24).
    • Mitra, Sugata. 2013. “Build a School in the Cloud.”
    • Morozov, Evgeny. 2014. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
    • Moss, Frank. 2011. The Sorcerers and Their Apprentices: How the Digital Magicians of the MIT Media Lab Are Creating the Innovative Technologies that Will Transform Our Lives. New York: Crown Business.
    • National Economic Council and Office of Science and Technology Policy. 2015. “A Strategy for American Innovation.” Washington, DC: The White House.
    • North, Michael. 2013. Novelty: A History of the New. Chicago: University of Chicago Press.
    • Nustad, Knut G. 2007. “Development: The Devil We Know?” In Exploring Post-Development: Theory and Practice, Problems and Perspectives, edited by Aram Ziai. London: Routledge. 35-46.
    • Obrecht, Alice and Alexandra T. Warner. 2014. “More than Just Luck: Innovation in Humanitarian Action.” London: ALNAP/ODI.
    • O’Connor, Kevin and Paul B. Brown. 2003. The Map of Innovation: Creating Something Out of Nothing. New York: Crown.
    • Peters, Tom. 1999. The Circle of Innovation: You Can’t Shrink Your Way to Greatness. New York: Vintage.
    • Polgreen, Lydia and Vikas Bajaj. 2010. “India Microcredit Faces Collapse From Defaults.” The New York Times (Nov 17).
    • Reid, Julian and Brad Evans. 2014. Resilient Life: The Art of Living Dangerously. New York: John Wiley and Sons.
    • Rodney, Walter. 1981. How Europe Underdeveloped Africa. Washington, DC: Howard University Press.
    • Rogers, Everett M. 1983. Diffusion of Innovations. Third edition. New York: The Free Press.
    • Roodman, David. 2012. Due Diligence: An Impertinent Inquiry into Microfinance. Washington, DC: Center for Global Development.
    • Ross, Kristin. 1996. Fast Cars, Clean Bodies: Decolonization and the Reordering of French Culture. Cambridge, MA: The MIT Press.
    • Rostow, Walter. 1965. The Stages of Economic Growth: A Non-Communist Manifesto. New York: Cambridge University Press.
    • Saldaña-Portillo, Josefina. 2003. The Revolutionary Imagination in the Americas and the Age of Development. Durham, NC: Duke University Press.
    • Schumpeter, Joseph. 1934. The Theory of Economic Development. Cambridge, MA: Harvard University Press.
    • Schumpeter, Joseph. 1947. “The Creative Response in Economic History.” The Journal of Economic History 7:2.
    • Schumpeter, Joseph. 2003. Capitalism, Socialism, and Democracy. London: Routledge.
    • Seiter, Ellen. 2005. The Internet Playground: Children’s Access, Entertainment, and Miseducation. New York: Peter Lang.
    • Shakespeare, William. 2005. Henry IV. New York: Bantam Classics.
    • Smithers, Rebecca. 2015. “University Installs Prototype ‘Pee Power’ Toilet.” The Guardian (Mar 5).
    • Stanford Graduate School of Business, Center for Social Innovation. No date. “Defining Social Innovation.”
    • Sweezy, Paul. 1943. “Professor Schumpeter’s Theory of Innovation.” The Review of Economics and Statistics 25:1.
    • TED.com. No date. “Our Organization.”
    • TED.com. 2013. “Sugata Mitra Creates a School in the Cloud.”
    • Truman, Harry. 1949. “Inaugural Address, January 20, 1949.”
    • UNICEF. 2014. “UNICEF Innovation Annual Report 2014: Focus on Future Strategy.”
    • USAID. 2017. “DIV’s Model in Detail.” (Apr 3).
    • Weber, Max. 2001. The Protestant Ethic and the Spirit of Capitalism. Translated by Talcott Parsons. London: Routledge Classics.
    • Yunus, Muhammad. 2012. “A History of Microfinance.” TEDx Vienna.

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and rapidly transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, about what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars divided between two perspectives known as narratology and ludology, asking whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and the matter of the cultural status of the videogame has hardly been put to rest. As a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that Bogost begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like something of a mistake, one that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games is precluded from the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion of Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), and the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—belie such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms and self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist” and, indeed, no matter how disinterested, inescapably historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities, nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there are riper currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not accord with present conceptions of academic activity (indeed, it might even be unfashionable!), which tend more generally to despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential ability to allow reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (interviews here and here), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    Back to the essay
    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

[3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

[9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).

    Back to the essay

    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
• Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
• ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
• ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
• Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
• Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
• Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
• Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
• Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
• Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
• Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
• Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.

    Back to the essay

  • Data and Desire in Academic Life

    Data and Desire in Academic Life

    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier was telling me this news, and before I really had time to feel any particular way about it, my leg vibrated. I had received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the tailoring of financial products and services. As I walked out of the store, for a moment, I felt the power of “big data,” how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was an all-too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much less than the amount my groceries cost? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated real-time data-enabled prescience, but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your Fitbit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening up new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.

While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state-run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of update). There is still the dazzlement at the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

There is a lot to unpack in these metaphors, the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire; about who gets access to this kind of data, the ethics of working with companies who have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

For most academics in the social sciences and the humanities “big data” is a term more likely to get caught in the throat than inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is not directly answered but strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

The existential threat at the heart of this question was catalyzed for many people by Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why, and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the Fitbit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state retrenchment in education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand-wringing over the politics of working with companies like Facebook, Twitter, and Google. If there is trepidation in the Ngram project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to being a hopeful attempt to coin a buzzy new word for the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination, and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather the renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile this separation has been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture, and everyday life; [1] rather it signals the acceptance of the idea of culture as a transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of increasing the engagement of eyeballs on advertisements. The Ngram Viewer does not make Google money, but it perhaps increases public support for the company’s larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars to defend in court.

This vision of the humanities is transactional in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the Ngram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry that seems ironically more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the Ngram Viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection: a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future to attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives them the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.

    Back to the essay

    _____

    Notes

[1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics,” and “late capitalism” more often than not are used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • The Ground Beneath the Screens

    The Ground Beneath the Screens

a review of Jussi Parikka, A Geology of Media (University of Minnesota Press, 2015) and The Anthrobscene (University of Minnesota Press, 2015)
    by Zachary Loeb

    ~

    Despite the aura of ethereality that clings to the Internet, today’s technologies have not shed their material aspects. Digging into the materiality of such devices does much to trouble the adoring declarations of “The Internet Is the Answer.” What is unearthed by digging is the ecological and human destruction involved in the creation of the devices on which the Internet depends—a destruction that Jussi Parikka considers an obscenity at the core of contemporary media.

Parikka’s tale begins deep below the Earth’s surface in deposits of a host of different minerals that are integral to the variety of devices without which you could not be reading these words on a screen. This story encompasses the labor conditions in which these minerals are extracted and eventually turned into finished devices; it tells of satellites, undersea cables, and massive server farms; and it includes a dark premonition of the return to the Earth which will occur following the death (possibly a premature death due to planned obsolescence) of the screen at which you are currently looking.

In a connected duo of new books, The Anthrobscene (referenced below as A) and A Geology of Media (referenced below as GM), media scholar Parikka wrestles with the materiality of the digital. Parikka examines the pathways by which planetary elements become technology, while considering the transformations entailed in the anthropocene, and artistic attempts to render all of this understandable. Drawing upon thinkers ranging from Lewis Mumford to Donna Haraway and from the Situationists to Siegfried Zielinski, Parikka constructs a way of approaching media that emphasizes that it is born of the Earth, borne upon the Earth, and fated eventually to return to its place of origin. Parikka’s work demands that materiality be taken seriously not only by those who study media but also by all of those who interact with media – it is a demand that the anthropocene must be made visible.

Time is an important character in both The Anthrobscene and A Geology of Media, for it provides the context in which one can understand the long history of the planet as well as the scale of the years required for media to truly decompose. Parikka argues that materiality needs to be considered beyond a simple focus upon machines and infrastructure, and should instead take into account “the idea of the earth, light, air, and time as media” (GM 3). Geology is harnessed as a method of ripping open the black box of technology and analyzing what the components inside are made of – copper, lithium, coltan, and so forth. The engagement with geological materiality is key for understanding the environmental implications of media, both in terms of the technologies currently in circulation and in terms of predicting the devices that will emerge in the coming years. Too often the planet is given short shrift in considerations of the technical, but “it is the earth that provides for media and enables it”; it is “the affordances of its geophysical reality that make technical media happen” (GM 13). Drawing upon Mumford’s writings about “paleotechnics” and “neotechnics” (concepts which Mumford had himself adapted from the work of Patrick Geddes), Parikka emphasizes that both the age of coal (paleotechnics) and the age of electricity (neotechnics) are “grounded in the wider mobilization of the materiality of the earth” (GM 15). Indeed, electric power is often still quite reliant upon the extraction and burning of coal.

More than just a pithy neologism, “the anthrobscene” is Parikka’s term for highlighting the ecological violence inherent in “the massive changes human practices, technologies, and existence have brought across the ecological board” (GM 16-17), shifts that often go under the more morally vague title of “the anthropocene.” For Parikka, “the addition of the obscene is self-explanatory when one starts to consider the unsustainable, politically dubious, and ethically suspicious practices that maintain technological culture and its corporate networks” (A 6). Like a curse word beeped out by television censors, much of the obscenity of the anthropocene goes unheard even as governments and corporations compete with ever greater élan for the privilege of pillaging portions of the planet – Parikka seeks to reinscribe the obscenity.

The world of high-tech media still relies upon the extraction of metals from the earth and, as Parikka shows, a significant portion of the minerals mined today are destined to become part of media technologies. Therefore, in contemplating geology and media, it can be fruitful to approach media using Zielinski’s notion of “deep time,” wherein “durations become a theoretical strategy of resistance against the linear progress myths that impose a limited context for understanding technological change” (GM 37, A 23). Deploying the notion of “deep time” demonstrates the ways in which a “metallic materiality links the earth to the media technological” while also emphasizing the temporality “linked to the nonhuman earth times of decay and renewal” (GM 44, A 30). Thus, the concept of “deep time” can be particularly useful in thinking through the nonhuman scales of time involved in media, such as the centuries required for e-waste to decompose.

    Whereas “deep time” provides insight into media’s temporal quality, “psychogeophysics” presents a method for thinking through the spatial. “Psychogeophysics” is a variation of the Situationist idea of “the psychogeographical,” but where the Situationists focused upon the exploration of the urban environment, “psychogeophysics” (which appeared as a concept in a manifesto in Mute magazine) moves beyond the urban sphere to contemplate the oblate spheroid that is the planet. What the “geophysical twist brings is a stronger nonhuman element that is nonetheless aware of the current forms of exploitation but takes a strategic point of view on the nonorganic too” (GM 64). Whereas an emphasis on the urban winds up privileging the world built by humans, the shift brought by “psychogeophysics” allows people to bear witness to “a cartography of architecture of the technological that is embedded in the geophysical” (GM 79).

The material aspects of media technology encompass many areas where visibility has broken down. In many cases this is suggestive of an almost willful disregard (ignoring exploitative mining and labor conditions as well as the harm caused by e-waste), but in still other cases it is reflective of the minute scales that materiality can assume (such as the metallic dust that dangerously fills workers’ lungs after they shine iPad cases). The devices that are surrounded by an optimistic aura in some nations thus obtain this sheen at the literal expense of others: “the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force” (GM 89). Indeed, those who fawn with religious adoration over the newest high-tech gizmo may simply be demonstrating that nobody they know personally will be sickened in assembling it, or be poisoned by it when it becomes e-waste. An emphasis on geology and materiality, as Parikka demonstrates, shows that the era of digital capitalism contains many echoes of the exploitation characteristic of bygone periods – appropriation of resources, despoiling of the environment, mistreatment of workers, exportation of waste: these tragedies have never ceased.

Digital media is excellent at creating a futuristic veneer of “smart” devices and immaterial-sounding aspects such as “the cloud,” and yet a material analysis demonstrates the validity of the old adage “the more things change the more they stay the same.” Despite efforts to “green” digital technology, “computer culture never really left the fossil (fuel) age anyway” (GM 111). But beyond relying on fossil fuels for energy, these devices can themselves be considered fossils-to-be as they go to rest in dumps wherein they slowly degrade, so that “we can now ask what sort of fossil layer is defined by the technical media condition…our future fossils layers are piling up slowly but steadily as an emblem of an apocalypse in slow motion” (GM 119). We may not be surrounded by dinosaurs and trilobites, but the digital media that we encounter are tomorrow’s fossils – which may be quite mysterious and confounding to those who, thousands of years hence, dig them up. Businesses that make and sell digital media thrive on a sense of time that consists of planned obsolescence, regular updates, and new products, but to take responsibility for the materiality of these devices requires that “notions of temporality must escape any human-obsessed vocabulary and enter into a closer proximity with the fossil” (GM 135). It requires a woebegone recognition that our technological detritus may be present on the planet long after humanity has vanished.

The living dead that lurch alongside humanity today are not the zombies of popular entertainment, but the undead media devices that provide the screens for consuming such distractions. These devices are already fossils, bound to be disposed of long before they stop working; it is vital “to be able to remember that media never dies, but remains as toxic residue,” and thus “we should be able to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41). We live with these zombies, we live among them, and even when we attempt to pack them off to unseen graveyards they survive under the surface. A Geology of Media is thus “a call for further materialization of media not only as media but as that bit which it consists of: the list of the geophysical elements that give us digital culture” (GM 139).

    It is not simply that “machines themselves contain a planet” (GM 139) but that the very materiality of the planet is becoming riddled with a layer of fossilized machines.

    * * *

The image of the world conjured up by Parikka in A Geology of Media and The Anthrobscene is far from comforting – after all, Parikka’s preference for talking about “the anthrobscene” does much to set a funereal tone. Nevertheless, these two books by Parikka do much to demonstrate that “obscene” may be a very fair word to use when discussing today’s digital media. By emphasizing the materiality of media, Parikka avoids the thorny discussions of the benefits and shortfalls of various platforms to instead pose a more challenging ethical puzzle: even if a given social media platform can be used for ethical ends, to what extent is this irrevocably tainted by the materiality of the device used to access these platforms? It is a dark assessment, one that Parikka delivers without much in the way of optimistic varnish, describing the anthropocene (on the first page of The Anthrobscene) as “a concept that also marks the various violations of environmental and human life in corporate practices and technological culture that are ensuring that there won’t be much of humans in the future scene of life” (A 1).

And yet both books manage to avoid the pitfall of simply coming across as wallowing in doom. Parikka is not pining for a primal pastoral fantasy, but is instead seeking to provide new theoretical tools with which his readers can attempt to think through the materiality of media. Here, Parikka’s emphasis on the way that digital technology is still heavily reliant upon mining and fossil fuels acts as an important counter to gee-whiz futurism. Similarly, Parikka’s mobilization of the notion of “deep time” and fossils acts as an important contribution to thinking through the lifecycles of digital media. Dwelling on the undeath of a smartphone slowly decaying in an e-waste dump over centuries is less about evoking a fearful horror than it is about making clear the horribleness of technological waste. The discussion of “deep time” seems like it can function as a sort of geological brake on accelerationist thinking, by emphasizing that no matter how fast humans go, the planet has its own sense of temporality. Throughout these two slim books, Parikka draws upon a variety of cultural works to strengthen his argument: ranging from Arthur Conan Doyle’s earth-pillaging mad scientist Professor Challenger, to the Coal Fired Computers of Yokokoji-Harwood (YoHa), to Molleindustria’s smartphone game “Phone Story,” which plays out on a smartphone’s screen the tangles of extraction, assembly, and disposal that are as much a part of the smartphone’s story as whatever ends the finished device eventually serves. Cultural and artistic works, when they intend to, may be able to draw attention to the obscenity of the anthropocene.

The Anthrobscene and A Geology of Media are complementary texts, but one need not read both in order to understand the other. As part of the University of Minnesota Press’s “Forerunners” series, The Anthrobscene is a small book (in terms of page count and physical size) which moves at a brisk pace; in some ways it functions as a sort of greatest-hits version of A Geology of Media – containing many of the essential high points, but lacking some of the elements that ultimately make A Geology of Media a satisfying and challenging book. Yet the two books work wonderfully together, with The Anthrobscene acting as a sort of primer – that a reader of both books will detect many similarities between the two is not a major detraction, for these books tell a story that often goes unheard today.

Those looking for neat solutions to the anthropocene’s quagmire will not find them in either of these books – and as these texts are primarily aimed at an academic audience this is not particularly surprising. These books are not caught up in offering hope – be it false or genuine. At the close of A Geology of Media, when Parikka discusses the need “to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41), this does not appear as a perfect panacea but as a way of possibly adjusting. Parikka is correct in emphasizing the ways in which the extractive regimes that characterized the paleotechnic continue on in the neotechnic era, and this is a point which Mumford himself made regarding the way that the various “technic” eras do not represent clean breaks from each other. As Mumford put it, “the new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures” (Mumford 2010, 236) – in other words, just as Parikka explains, the paleotechnic survives well into the neotechnic. The reason this is worth mentioning is not to challenge Parikka, but to highlight that the “neotechnic” is not meant as a characterization of a utopian technical epoch that has parted ways with the exploitation that had characterized the preceding period. For Mumford the need was to move beyond the anthropocentrism of the neotechnic period and move towards what he called (in The Culture of Cities) the “biotechnic,” a period wherein “technology itself will be oriented toward the culture of life” (Mumford 1938, 495). Granted, as Mumford’s later work and these books by Parikka make clear, instead of arriving at the “biotechnic” we may instead be getting the anthrobscene. And reading these books by Parikka makes it clear that one could not characterize the anthrobscene as being “oriented toward the culture of life” – indeed, it may be exactly the opposite. Or, to stick with Mumford a bit longer, it may be that the anthrobscene is the result of the triumph of “authoritarian technics” over “democratic” ones. Nevertheless, the true dirge-like element of Parikka’s books is that they raise the possibility that it may well be too late to shift paths – that the neotechnic was perhaps just a coat of fresh paint applied to hide the rusting edifice of paleotechnics.

A Geology of Media and The Anthrobscene are conceptual toolkits; they provide the reader with the drills and shovels needed to dig into the materiality of digital media. But what these books make clear is that, along with the pickaxe and the archeologist’s brush, one also needs a gas mask to endure the noxious fumes. Ultimately, what Parikka shows is that the Situationist-inspired graffiti of May 1968, “beneath the streets – the beach,” needs to be rewritten in the anthrobscene.

    Perhaps a fitting variation for today would read: “beneath the streets – the graveyard.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    Mumford, Lewis. 2010. Technics and Civilization. Chicago: University of Chicago Press.

    Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.

  • Orientation and Asian Literature: A Conversation with Rob Wilson

    Orientation and Asian Literature: A Conversation with Rob Wilson

    Rob Wilson

    Last week, the journal Former People published an interview with poet, translator, scholar and b2 contributor Rob Wilson, who discusses the crossroads and merging lanes of Asian literature, both within and set against a global highway:

    “‘Asia’ remains an impossible if necessary category, enacting arbitrary and power-laden inclusions and exclusions from its origins in Greece and Rome and imperial England down to its present iterations when East Asia and China seem to dominate. Asia is a catachresis, as Gayatri Spivak tracks it in Other Asias, even as she attempts to include, compare, and translate ‘other Indias’ inside what gets taken as the literature of India within dominant Anglophone frameworks of comparative literature.” Read the full conversation.

    __

    cover photo: The Women of Algiers, 1834, by Eugène Delacroix