
  • Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)


    a review of Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, 2018)

    by Zachary Loeb

    ~

    There is something rather precious about Google employees, and Internet users, who earnestly believe the “don’t be evil” line. Though those three words have often been taken to represent a sort of ethos, their primary function is as a steam vent – providing a useful way to allow building pressure to escape before it can become explosive. While “don’t be evil” is associated with Google, most of the giants of Silicon Valley have their own variations of this comforting ideological façade: Apple’s “think different,” Facebook’s talk of “connecting the world,” the smiles on the side of Amazon boxes. And when a revelation troubles this carefully constructed exterior – when it turns out Google is involved in building military drones, when it turns out that Amazon is making facial recognition software for the police – people react in shock and outrage. How could this company do this?!?

    What these revelations challenge is not simply the mythos surrounding particular tech companies, but the mythos surrounding the tech industry itself. After all, many people have their hopes invested in the belief that these companies are building a better, brighter future, and they are naturally taken aback when they are forced to reckon with stories that reveal how these companies are building the types of high-tech dystopias that science fiction has been warning us about for decades. And in this space there are some who seem eager to allow a new myth to take root: one in which the unsettling connections between big tech firms and the military-industrial complex are something new. But as Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates, the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today” (9).

    Thus, cases of Google building military drones, Facebook watching us all, and Amazon making facial recognition software for the police need to be understood not as aberrations. Rather, they are business as usual.

    Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

    Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

    While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58) – and this split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial or scientific side. ARPANET, the famous forerunner of the Internet, was developed to connect computer centers at a variety of prominent universities. Reliant on Interface Message Processors (IMPs), the system routed messages through a variety of nodes; if one node went down, the system would reroute the message through other nodes – it was a system of relaying information built to withstand a nuclear war.

    Though all manner of utopian myths surround the early Internet, and by extension its forerunner, Levine highlights that “surveillance was baked in from the very beginning” (75). Case in point: the largely forgotten CONUS Intel program that gathered information on millions of Americans. By encoding this information on IBM punch cards, which were then fed into a computer, law enforcement groups and the army were able to access information not only regarding criminal activity, but activities protected by the First Amendment. As news of these databases reached the public, they generated fears of a high-tech surveillance society, leading some Senators, such as Sam Ervin, to push back against the program. And in a foreshadowing of further things to come, “the army promised to destroy the surveillance files, but the Senate could not obtain definitive proof that the files were ever fully expunged” (87). Though there were concerns about the surveillance potential of ARPANET, its growing power was hardly checked, and more government agencies began building their own subnetworks (PRNET, SATNET). Yet, as they relied on different protocols, these networks could not connect to each other, until TCP/IP, “the same basic network language that powers the Internet today” (95), allowed them to do so.

    Yet surveillance of citizens, and public pushback against computerized control, is not the grand origin story that most people are familiar with when it comes to the Internet. Instead the story that gets told is one whereby a military technology is filtered through the sieve of a very selective segment of the 1960s counterculture to allow it to emerge with some rebellious credibility. This view, owing much to Stewart Brand, transformed the nascent Internet from a military technology into a technology for everybody “that just happened to be run by the Pentagon” (106). Brand played a prominent and public role in rebranding the computer, as well as those working on the computers – turning these cold calculating machines into doors to utopia, and portraying computer programmers and entrepreneurs as the real heroes of the counterculture. In the process the military nature of these machines disappeared behind a tie-dyed shirt, and the fears of a surveillance society were displaced by hip promises of total freedom. The government links to the network were further hidden as ARPANET slowly morphed into the privatized commercial system we know as the Internet. It may seem mind-boggling that the Internet was simply given away with “no real public debate, no discussion, no dissension, and no oversight” (121), but it is worth remembering that this was not the Internet we know. Rather, it was how the myth of the Internet we know was built – a myth that combined, as was best demonstrated by Wired magazine, “an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used” (133).

    The shift from ARPANET to the early Internet to the Internet of today presents a steadily unfolding tale wherein the result is that, today, “the Internet is like a giant, unseen blob that engulfs the modern world” (169). And in terms of this “engulfing” it is difficult not to think of a handful of giant tech companies (Amazon, Facebook, Apple, eBay, Google) that are responsible for much of it. In the present Internet atmosphere people have become largely inured to the almost clichéd canard that “if you’re not paying, you are the product,” but what this represents is how people have, largely, come to accept that the Internet is one big surveillance machine. Of course, feeding information to the giants made a sort of sense: many people (at least early on) seem to have been genuinely taken in by Google’s “Don’t Be Evil” image, and they saw themselves as the beneficiaries of the fact that “the more Google knew about someone, the better its search results would be” (150). The key insight that firms like Google seem to have understood is that a lot can be learned about a person based on what they do online (especially when they think no one is watching) – what people search for, what sites people visit, what people buy. And most importantly, what these companies understand is that “everything that people do online leaves a trail of data” (169), and controlling that data is power. These companies “know us intimately, even the things that we hide from those closest to us” (171). ARPANET found itself embroiled in a major scandal, at its time, when it was revealed how it was being used to gather information on and monitor regular people going about their lives – and it may well be that “in a lot of ways” the Internet “hasn’t changed much from its ARPANET days. It’s just gotten more powerful” (168).

    But even as people have come to gradually accept, by their actions if not necessarily by their beliefs, that the Internet is one big surveillance machine – periodically events still puncture this complacency. Case in point: Edward Snowden’s revelations about the NSA, which splashed the scale of Internet-assisted surveillance across the front pages of the world’s newspapers. Reporting linked to the documents Snowden leaked revealed how “the NSA had turned Silicon Valley’s globe-spanning platforms into a de facto intelligence collection apparatus” (193), and these documents exposed “the symbiotic relationship between Silicon Valley and the US government” (194). And yet, in the ensuing brouhaha, Silicon Valley was largely able to paint itself as the victim. Levine attributes some of this to Snowden’s own libertarian political bent, as he became a cult hero amongst technophiles, cypherpunks, and Internet advocates: “he swept Silicon Valley’s role in Internet surveillance under the rug” (199), while advancing a libertarian belief in “the utopian promise of computer networks” (200) similar to that professed by Stewart Brand. In many ways Snowden appeared as the perfect heir apparent to the early techno-libertarians, especially as he (like them) focused less on mass political action and more on doubling down on the idea that salvation would come through technology. And Snowden’s technology of choice was Tor.

    While Tor may project itself as a solution to surveillance, and be touted as such by many of its staunchest advocates, Levine casts doubt on this. Noting that “Tor works only if people are dedicated to maintaining a strict anonymous Internet routine,” one consisting of dummy e-mail accounts and all transactions carried out in Bitcoin, Levine suggests that what Tor offers is “a false sense of privacy” (213). Levine describes the roots of Tor in an original need to provide government operatives with an ability to access the Internet, in the field, without revealing their true identities; and in order for Tor to be effective (and not simply signal that all of its users are spies and soldiers) the platform needed to expand its user base: “Tor was like a public square—the bigger and more diverse the group assembled there, the better spies could hide in the crowd” (227).

    Though Tor had spun off as an independent non-profit, it remained reliant for much of its funding on the US government, a matter which Tor aimed to downplay through emphasizing its radical activist user base and by forming close working connections with organizations like WikiLeaks that often ran afoul of the US government. And in the figure of Snowden, Tor found a perfect public advocate, who seemed to be living proof of Tor’s power – after all, he had used it successfully. Yet, as the case of Ross Ulbricht (the “Dread Pirate Roberts” of Silk Road notoriety) demonstrated, Tor may not be as impervious as it seems – researchers at Carnegie Mellon University “had figured out a cheap and easy way to crack Tor’s super-secure network” (263). To further complicate matters, Tor had come to be seen by the NSA “as a honeypot”; to the NSA, “people with something to hide” were the ones using Tor, and simply by using it they were “helping to mark themselves for further surveillance” (265). And much of the same story seems to be true for the encrypted messaging service Signal (it is government funded, and less secure than its fans like to believe). While these tools may be useful to highly technically literate individuals committed to maintaining constant anonymity, “for the average users, these tools provided a false sense of security and offered the opposite of privacy” (267).

    The central myth of the Internet frames it as an anarchic utopia built by optimistic hippies hoping to save the world from intrusive governments through high-tech tools. Yet, as Surveillance Valley documents, “computer technology can’t be separated from the culture in which it is developed and used” (273). Surveillance is at the core of, and has always been at the core of, the Internet – whether the all-seeing eye be that of the government agency or the corporation. And this is a problem that, alas, won’t be solved by crypto-fixes that present technological solutions to political problems. The libertarian ethos that undergirds the Internet works well for tech giants and cypherpunks, but a real alternative is not a set of tools that allow a small technically literate gaggle to play in the shadows, but a genuine democratization of the Internet.

    *

    Surveillance Valley is not interested in making friends.

    It is an unsparing look at the origins of, and the current state of, the Internet. And it is a book that has little interest in helping to prop up the popular myths that sustain the utopian image of the Internet. It is a book that should be read by anyone who was outraged by the Facebook/Cambridge Analytica scandal, anyone who feels uncomfortable about Google building drones or Amazon building facial recognition software, and frankly by anyone who uses the Internet. At the very least, after reading Surveillance Valley many of those aforementioned situations seem far less surprising. While there is no shortage of books, many of them quite excellent, that argue that steps need to be taken to create “the Internet we want,” in Surveillance Valley Yasha Levine takes a step back and insists “first we need to really understand what the Internet really is.” And it is not as simple as merely saying “Google is bad.”

    While much of the history that Levine unpacks won’t be new to historians of technology, or those well versed in critiques of technology, Surveillance Valley brings together many, often separate, strands into one narrative. Too often the early history of computing and the Internet is placed in one silo, while the rise of the tech giants is placed in another – by bringing them together, Levine is able to show the continuities and allow them to be understood more fully. What is particularly noteworthy in Levine’s account is his emphasis on early pushback to ARPANET, an often forgotten series of occurrences that certainly deserves a book of its own. Levine describes students in the 1960s who saw in early ARPANET projects “a networked system of surveillance, political control, and military conquest being quietly assembled by diligent researchers and engineers at college campuses around the country,” and as Levine provocatively adds, “the college kids had a point” (64). Similarly, Levine highlights NBC reporting from 1975 on the CIA and NSA spying on Americans by utilizing ARPANET, and on the efforts of Senators to rein in these projects. Though Levine is not presenting, nor is he claiming to present, a comprehensive history of pushback and resistance, his account makes it clear that liberatory claims regarding technology were often met with skepticism. And much of that skepticism proved to be highly prescient.

    Yet this history of resistance has largely been forgotten amidst the clever contortions that shifted the Internet’s origins, in the public imagination, from counterinsurgency in Vietnam to the counterculture in California. Though the area of Surveillance Valley that will likely cause the most contention is Levine’s chapters on crypto-tools like Tor and Signal, perhaps his greatest heresy is in his refusal to pay homage to the early tech-evangels like Stewart Brand and Kevin Kelly. While the likes of Brand, and John Perry Barlow, are often celebrated as visionaries whose utopian blueprints have been warped by power-hungry tech firms, Levine is frank in framing such figures as long-haired libertarians who knew how to spin a compelling story in such a way that made empowering massive corporations seem like a radical act. And this is in keeping with one of the major themes that runs, often subtly, through Surveillance Valley: the substitution of technology for politics. Thus, in his book, Levine not only frames the Internet as disempowering insofar as it runs on surveillance and relies on massive corporations, but also emphasizes how the ideological core of the Internet focuses all political action on technology. To every social, economic, and political problem the Internet presents itself as the solution – but Levine is unwilling to go along with that idea.

    Those who were familiar with Levine’s journalism before he penned Surveillance Valley will know that much of his reporting has covered crypto-tech, like Tor, and similar privacy technologies. Indeed, in a certain respect, Surveillance Valley can be read as an outgrowth of that reporting. And it is also important to note, as Levine does in the book, that he did not make himself many friends in the crypto community by taking on Tor. It is doubtful that cypherpunks will like Surveillance Valley, but it is just as doubtful that they will bother to actually read it and engage with Levine’s argument or the history he lays out. This is a shame, for it would be a mistake to frame Levine’s book as an attack on Tor (or on those who work on the project). Levine’s comments on Tor are in keeping with the thrust of the larger argument of his book: such privacy tools are high-tech solutions to problems created by high-tech society that mainly serve to keep people hooked into all those high-tech systems. And he questions the politics of Tor, noting that “Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable solution” (268). Or, to put it another way, Tor is kind of like shopping at Whole Foods – people who are concerned about their food are willing to pay a bit more to get their food there, but in the end shopping there lets people feel good about what they’re doing without genuinely challenging the broader system. And, of course, now Whole Foods is owned by Amazon. The most important element of Levine’s critique of Tor is not that it doesn’t work – for some (like Snowden) it clearly does – but that most users do not know how to use it properly (and are unwilling to lead a genuinely full-crypto lifestyle) and so it fails to offer more than a false sense of security.

    Thus, to say it again, Surveillance Valley isn’t particularly interested in making a lot of friends. With one hand it brushes away the comforting myths about the Internet, and with the other it pushes away the tools that are often touted as the solution to many of the Internet’s problems. And in so doing Levine takes on a variety of technoculture’s sainted figures like Stewart Brand, Edward Snowden, and even organizations like the EFF. While Levine clearly doesn’t seem interested in creating new myths, or propping up new heroes, it seems as though he somewhat misses an opportunity here. Levine shows how some groups and individuals had warned about the Internet back when it was still ARPANET, and a greater emphasis on such people could have helped create a better sense of alternatives and paths that were not taken. Levine notes near the book’s end that “we live in bleak times, and the Internet is a reflection of them: run by spies and powerful corporations just as our society is run by them. But it isn’t all hopeless” (274). Yet it would be easier to believe the “isn’t all hopeless” sentiment had the book provided more analysis of successful instances of pushback. While it is respectable that Levine puts forward democratic (small d) action as the needed response, this comes as the solution at the end of a lengthy work that has discussed how the Internet has largely eroded democracy. What Levine’s book points to is that it isn’t enough to just talk about democracy; one needs to recognize that some technologies are democratic while others are not. And though we are loath to admit it, perhaps the Internet (and computers) simply are not democratic technologies. Sure, we may be able to use them for democratic purposes, but that does not make the technologies themselves democratic.

    Surveillance Valley is a troubling book, but it is an important book. It smashes comforting myths and refuses to leave its readers with simple solutions. What it demonstrates in stark relief is that surveillance and unnerving links to the military-industrial complex are not signs that the Internet has gone awry, but signs that the Internet is functioning as intended.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


  • Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age


    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)––I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication inevitably implies the existence of a society which is, in his eyes, itself an unnatural institution. One can find throughout Flusser’s work traces of his life-long attempt to think through the full philosophical implications of European nihilism, and evidence of this intellectual engagement can be readily found in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)––enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses––then cannot one be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line to Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to the narrative’s 2013 present. The novel’s Silicon Valley setting, nominal and characterological doubling, and structural narrative coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen––or all Joshua Cohens––is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments. In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual––or every individual, it does not really matter––is also nothing more or anything less than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological, reading subject, and it suggests too that literature––or even just language in a general sense––and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet stylistically ironic and incisive interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and to a lesser extent, subjectivity, appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitute some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues similarly that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories––or simply the records of our data use (and its subsequent use of us)––comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and threefold of its Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less than the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being––it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

    Ontological thinking––which today means ontological accounting, or finding ways to account for the ontologically actuarial––is today’s philosophical equivalent of best practices for data management, except there is no difference between one’s data and one’s Self. Whatever ontological difference might once have stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent to the Self emerges out of what Jarzombek calls an “animated ontology.”

    In this “animated ontology,” “subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms which merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

    Digital Stockholm Syndrome asserts that what exists in today’s ontological systems––systems both philosophical and computational––is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).


    One wonders if the incessant cultural and political reminders regarding the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. How come it is humanities scholars who are in some cases most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human?” And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that [the] whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

    Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which we locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/). As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “‘Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

    While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are going to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.


    _____

    Notes

    [1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Encapsulating Flusser’s spirit of healthy skepticism toward technical apparatuses, Hu fears, as Flusser does, a situation in which “the technology has produced the means of its own interpretation” (xixx).

    [2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “‘New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, the speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not about what is or is not an object, but rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

    [4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein-Smith’s recent essay “Scientizing the Humanities: Shifts, Collisions, Negotiations,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

    [5] In “A Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

    [6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? [….] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

    [7] Wendy Hui Kyong Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension to confession, self-curation as self-exposition, and online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

    I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “auto-theory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010), Taipei (2013), Natasha Stagg’s Surveys, and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ certain social/political acts of transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered as a theory of the self, or a self-theorization, or perhaps even the idea that personal experience is itself a kind of theory might apply here, too. I wonder, though, how its meaning would change if the prefix “auto” was understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

    [10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-self whose authenticity is produced when it is fact-checked against a biographical database (3)—self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”–its representation in and as data–comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

    [11] Flusser is again instructive on this point, albeit in his own idiosyncratic way. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission not only that the category of “the human” is unnatural, but that the study of such an unnatural thing is itself unnatural as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves as supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

    [13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, which is a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” The Floating Sheep Collective raises in this section crucially important questions regarding ownership of data with regard to marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars—the Self’s data points help point the Self towards its own ontological location in and as data—then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

    [14] “Mind/paranoia: they are the same word!” (Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
    • Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
    • Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
    • Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
    • Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
    • Hui, Yuk. “Algorithmic Catastrophe—The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
    • Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
    • Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
    • Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014): n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object-Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.
  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, and have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which being that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.
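    (An aside for the methodologically curious: below is a minimal sketch, in Python, of what that expert-consensus winnowing looks like in code. Everything in it is hypothetical – the candidate list, the twelve-person panel, the arbitrary stand-in scores; the NMC’s actual survey instruments and data are its own. The point is only that the procedure converges on a tidy list without ever consulting a theory or a model.)

    ```python
    # Toy sketch of Delphi-style winnowing (not the NMC's actual process):
    # a panel scores candidate trends, the bottom half is culled, and the
    # survivors are re-scored until six remain. There is no model here --
    # only convergence toward whatever the panel already believes.
    from statistics import mean

    # Hypothetical candidates and panel -- placeholders, not Horizon Report data.
    candidates = [
        "mobile learning", "virtual worlds", "open content", "games",
        "learning analytics", "wearables", "MOOCs", "3D printing",
        "augmented reality", "adaptive learning", "drones", "blockchain",
        "e-books", "gesture computing", "telepresence", "digital badges",
        "flipped classroom", "robotics", "quantified self", "MUVEs",
    ]
    PANEL = range(12)  # twelve hypothetical experts
    KEEP = 6           # the report's six key technologies

    def vote(expert: int, trend: str) -> float:
        """Stand-in for one expert's survey response; arbitrary by design."""
        return (hash((expert, trend)) % 100) / 100.0

    round_no = 0
    while len(candidates) > KEEP:
        round_no += 1
        ranked = sorted(candidates,
                        key=lambda t: mean(vote(e, t) for e in PANEL),
                        reverse=True)
        # Cull the bottom half each round, but never below the final cutoff.
        candidates = ranked[:max(KEEP, len(ranked) // 2)]
        print(f"round {round_no}: {len(candidates)} candidates remain")

    print("consensus:", candidates)
    ```

    However you tune the scoring, the output is always a confident-looking list of six – which is rather the point.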

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner, too, removes technologies from the Hype Cycle each year with no explanation of what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).
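    (To see how little machinery such a comparison requires, here is a back-of-the-envelope version in Python. The adoption figures are invented placeholders – real series come from census and survey data – but the method is just linear interpolation between observed points, with the clock started at commercialization rather than invention.)

    ```python
    # Toy calculation of "years to 50% household adoption," measured from
    # commercialization rather than invention. The adoption series below
    # are invented placeholders, not real survey data.

    def year_of_threshold(curve: dict, threshold: float = 0.5) -> float:
        """Linearly interpolate the year the adoption share crosses `threshold`."""
        years = sorted(curve)
        for y0, y1 in zip(years, years[1:]):
            a0, a1 = curve[y0], curve[y1]
            if a0 < threshold <= a1:
                # Fraction of the interval needed to reach the threshold.
                return y0 + (threshold - a0) / (a1 - a0) * (y1 - y0)
        raise ValueError("threshold never crossed in the data given")

    # Hypothetical series: year -> share of US households (0.0 at launch).
    telephone = {1878: 0.0, 1900: 0.05, 1920: 0.35, 1946: 0.50, 1950: 0.62}
    smartphone = {2007: 0.0, 2009: 0.17, 2011: 0.35, 2012: 0.50, 2013: 0.56}

    for name, curve, launch in [("telephone", telephone, 1878),
                                ("smartphone", smartphone, 2007)]:
        crossed = year_of_threshold(curve)
        print(f"{name}: ~{crossed - launch:.0f} years from launch to 50%")
    ```

    The answer you get depends entirely on where you start the clock – which is precisely what the viral graph fudges.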

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they ever? Will it be fewer than nine years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Bradley J. Fest – The Function of Videogame Criticism

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and the matter of the cultural status of the videogame has hardly been put to rest. As a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that Bogost begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like something of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

    To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games precludes the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion about Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), or the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—betray such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

    In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

    Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—games not only involve play and narrative but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms while remaining self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking historically about the function of criticism should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist” and, no matter how disinterested, inescapably historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities, nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say that, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, there may be riper currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

    In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not adhere to present conceptions about academic activity (indeed, it might even be unfashionable!), which seem more generally to despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential to allow reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, we look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

    Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2, Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and of a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

    [3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).


    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.


  • Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically


    By Jürgen Geuter
    ~

    The relationship of government and governed has always been complicated. Questions of power, legitimacy, structural and institutional violence, of rights and rules and restrictions keep evading any ultimate solution, chaining societies to constant struggles over shifting balances between different positions and extremes, or to defining completely new aspects of or perspectives on them to shake off the often-perceived stalemate. Politics.

    Politics is a simple word but one with a lot of history. Coming from the ancient Greek term for “city” (as in city-state), the word pretty much shows what it is about: establishing the structures that a community can thrive on. Policy is infrastructure: not made of wire or asphalt but of ideas and ways of connecting them, while giving the structure ways of enforcing its own integrity.

    But while the processes of negotiation and discourse that define politics will never stop so long as intelligent beings exist, recent years have seen the emergence of technology as a replacement for politics. From Lawrence Lessig’s “Code is Law” to Marc Andreessen’s “Software Is Eating the World”: a small elite of people building the tools and technologies that we use to run our lives has in a way started to emancipate itself from politics as an idea. Because where politics – especially in democratic societies – potentially involves more people than just a small elite, technologism and its high priests pull off a fascinating trick: defining policy and politics while claiming not to be political.

    This is useful for a bunch of reasons. It allows them to sidestep certain existing institutions and structures, avoiding friction and loss of forward momentum. “Move fast and break things” was Facebook’s internal motto until only very recently. It also makes it easy to shed certain responsibilities that we expect political entities of power to fulfill. Claiming “not to be political” allows you to have mobs of people hunting others on your service without really having to do anything about it until it becomes a PR problem. Finally, evading the label of politics grants a lot more freedom when it comes to wielding the powers that the political structures have given you: it’s no coincidence that many Internet platforms declare “free speech” a fundamental and absolute right, a necessary truth of the universe – unless it’s about showing a woman breastfeeding or talking about the abuse free-speech extremists have thrown at feminists.

    Yesterday news about a very interesting case directly at the contact point of politics and technologism hit mainstream media: Apple refused – in a big and well-written open letter to its customers – to fulfill an order by the District Court of California to help the FBI unlock an iPhone 5c that belonged to one of the shooters in last year’s San Bernardino shooting, in which 14 people were killed and 22 more were injured.

    Apple’s argument is simple and ticks all the boxes of established technical truths about cryptography: Apple’s CEO Tim Cook points out that adding a back door to its iPhones would endanger all of Apple’s customers because nobody can make sure that such a back door would only be used by law enforcement. Some hacker could find that hole and use it to steal information such as pictures, credit card details or personal data from people’s iPhones or make these little pocket computers do illegal things. The dangers Apple correctly outlines are immense. The beautifully crafted letter ends with the following statements:

    Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

    We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

    While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

    Nothing in that defense is new: the debate about government backdoors has been going on for decades, with companies, software makers, and government officials basically exchanging the same bullet points every few years. Government: “We need access. For security.” Software people: “Yeah, but then nobody’s system is secure anymore.” Rinse and repeat. That whole debate hasn’t even changed through Edward Snowden’s leaks: while the positions were presented in an ever shriller tone, the positions themselves stayed monolithic and unmoved. Two unmovable objects yelling at each other to get out of the way.

    Apple’s open letter was received with high praise all through the tech-savvy elites, from the cypherpunks to journalists and technologists. One tweet really stood out for me because it illustrates a lot of what we have so far talked about:

    Read that again. Tim Cook/Apple are clearly separated from politics and politicians when it comes to – and here’s the kicker – the political concept of individual liberty. A deeply political debate – the one about where the limits of individual liberty might lie – is ripped out of the realm of politicians (and us, but we’ll come to that later). Sing the praises of the new Guardian of the Digital Universe.

    But is the court order really the fundamental danger to everybody’s individual liberty that Apple presents it as? The actual text paints a different picture. The court orders Apple to help the FBI access one specific, identified iPhone; the order lists the actual serial number of the device. What “help” means in this context is also specified in great detail:

    1. Apple is supposed to disable the iPhone features that automatically delete all user data stored on the device, protections usually in place to prevent thieves from accessing that data.
    2. Apple will also give the FBI some way to send passcodes (guesses of the PIN that was used to lock the phone) to the device. This sounds strange but will make sense later.
    3. Apple will disable all software features that introduce delays for entering more passcodes. You know the drill: You type the wrong passcode and the device just waits for a few seconds before you can try a new one.

    Apple is compelled to write a little piece of software that runs only on the specified iPhone (the text is very clear on that) and that disables the two security features described in points 1 and 3. Because the court actually recognizes the dangers of having that kind of software in the wild, it explicitly allows Apple to do all of this within its own facilities: the phone would be sent to an Apple facility and the software loaded into the RAM of the device. This is where point 2 comes in: once the device has been modified by loading the Apple-signed software into its RAM, the FBI needs a way to send PIN-code guesses to the device. The court order even explicitly states that Apple’s new software package is only supposed to go to RAM and not change the device in other ways. Potentially dangerous software would never leave Apple’s premises; Apple also doesn’t have to introduce or weaken the security of all its devices; and if Apple can fulfill the tasks described in some other way, the court is totally fine with it. The government – any government – doesn’t get a generic backdoor to all iPhones or all Apple products. In a more technical article than this one, Dan Guido outlines how what the court order asks for would work on the iPhone in question but not on most newer ones.
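
    To make concrete why points 1 and 3 are the crux of the order, consider the arithmetic of guessing a passcode once those software protections are gone. What follows is a back-of-the-envelope sketch, not anything from the court order itself: the 80 milliseconds per guess is an assumption drawn from reporting like Guido’s on the iPhone 5c’s hardware-bound key derivation.

        # Rough estimate of brute-forcing an iPhone PIN once the escalating
        # retry delays and the auto-erase feature are disabled. The ~80 ms per
        # guess is an assumed hardware key-derivation cost (reported for the
        # iPhone 5c), not an official figure.

        SECONDS_PER_GUESS = 0.08  # assumed latency per passcode attempt

        def worst_case_hours(pin_digits: int) -> float:
            """Hours needed to try every possible PIN of the given length."""
            attempts = 10 ** pin_digits
            return attempts * SECONDS_PER_GUESS / 3600

        for digits in (4, 6, 10):
            print(f"{digits}-digit PIN: {worst_case_hours(digits):,.1f} hours worst case")

        # 4-digit PIN: ~0.2 hours (about 13 minutes)
        # 6-digit PIN: ~22 hours
        # 10-digit PIN: ~222,222 hours, i.e. decades

    In other words, once the delays and the auto-erase are out of the way, a short numeric passcode falls in minutes; it is the software protections, not the four digits themselves, that stand between the FBI and the data, which is why the order asks for exactly these features to be switched off.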

    So while Apple’s PR evokes the threat of big government’s boots marching in to stomp on everybody’s individual freedoms, the text of the court order and the technical facts make the case ultra-specific: Apple isn’t supposed to build a back door for iPhones but to help law enforcement open up one specific phone in its possession, connected not to a theoretical future crime but to the actual murder of 14 people.

    We could just attribute it all to Apple effectively taking a PR opportunity to strengthen the image it has been developing after realizing that it just couldn’t really do data and services: the image of the protector of privacy and liberty. An image that it kicked into overdrive post-Snowden. But that would be too simple, because the questions here are a lot more fundamental.

    How do we – as globally networked individuals living in digitally connected and mutually overlapping societies – define the relationship between transnational corporations and the rules and laws we have created?

    Because here’s the fact: Apple was ordered by a democratically legitimized court to help in the investigation of a horrible capital crime that led to the murder of 14 people, by giving it a way to potentially access one specific phone out of the more than 700 million phones Apple has made. And Apple refuses.

    Which – don’t get me wrong – is their right as an entity in the political system of the US: they can fight the court order using the law. They can also just refuse and see what the government, what law enforcement, will do to make them comply. Sometimes the costs of breaking that kind of resistance overshadow the potential value, so the request gets dropped. But where do we stand, we individuals whose liberty is supposedly at stake? Where is our voice?

    One of the main functions of political systems is generating legitimacy for power. While some less-than-desirable systems might generate legitimacy by being the strongest, modern times have established less physical legitimizations of power: a king, for example, is often supposed to rule because one or more god(s) say so, which generates legitimacy, especially among those who share that belief. In democracies, legitimacy is generated by elections or votes: by giving people the right to speak their minds, to elect representatives, and to be elected, the power (and structural violence) that a government exerts is supposedly legitimized.

    Some people dispute the legitimacy of even democratically distributed power, and it’s not as though they have no point, but let’s not dive into the teachings of anarchism here. The more mainstream position is that there is a rule of law and that the institutions of the United States, as a democracy, are legitimized as the representation of US citizens. They represent every US citizen, and they are each supposed to keep intact the political structure, the laws and rules and rights that come with being a US citizen (or living there). And when that system speaks to a company it’s supposed to govern, and the company just gives it the finger (but in a really nice letter), how does the public react? They celebrate.

    But what’s to celebrate? This is not some clandestine spy network tracking everybody’s every waking move to calculate who might commit a crime in 10 years and assassinate them. This is a concrete case, a request confirmed by a court, in complete accordance with existing practices in many other domains. If somebody runs around and kills people, the police can look into their mail and enter their home. That doesn’t abolish the protections of the integrity of your mail or home, but it’s an attempt to balance the rights and liberties of the individual with the rights and needs of all others and the social system they form.

    Rights are hardly ever absolute; some might even argue that no right whatsoever is absolute: you have the right to move around freely, but I can still lock you out of my home, and, given certain crimes, you might be locked up in prison. You have the right to express yourself, but when you start threatening others, limits kick in. The balancing act with which I also began this essay has been going on publicly for ages, and it will go on for a lot longer. Because the world changes. New needs might emerge; technology might create whole new domains of life that force us to rethink how we interact and which restrictions we apply. But that’s nothing that one company just decides.

    In unconditionally celebrating Cook’s letter a dangerous “apolitical” understanding of politics shows its ugly face: An ideology so obsessed with individual liberty that it happily embraces its new unelected overlords. Code is Law? More like “Cook is Law”.

    This isn’t to say that Apple (or any other company in that situation) just has to do automatically everything a government tells it to. It’s quite obvious that many of the big tech companies are not happy about the idea of establishing a precedent of helping government authorities. Today it’s the FBI, but what if some agency from some dictatorship wants the data from some dissident’s phone? Is a company just supposed to pick and choose?

    The world might not be growing closer together, but it is getting connected a lot more, and that leads to inconsistent laws, regulations, political ideologies, et cetera, colliding. And so far we as mankind have no idea how to deal with it. Facebook gets criticized in Europe for applying very puritanical standards when it comes to nudity, but as a US company it does follow established US traditions. Should it apply German traditions, which are a lot more open when it comes to depictions of nudity, as well? What about the rules of other countries? Does Facebook need to follow all of them? Some? If so, which ones?

    While this creates tough problems for international lawmakers, governments, and us more mortal people, it concerns companies very little, as they can – when push comes to shove – just move their base of operations somewhere else. Which they already do to “optimize” – that is, avoid – taxes, a domain in which Cook also recently dismissed US government requirements as “total political crap.” Is this also a cause for all of us across the political spectrum to celebrate Apple’s protection of individual liberty? I wonder how the open letter would have looked if Ireland, a tax haven many technology companies love to use, had asked for the same thing California did.

    This is not specifically about Apple. Or Facebook. Or Google. Or Volkswagen. Or Nestle. This is about all of them and all of us. If we uncritically accept that transnational corporations decide when and how to follow the rules we as societies have established, just because right now their (PR) interests and ours might superficially align, how can we later criticize the same companies when they don’t pay taxes or decide not to follow data-protection laws? Especially as a kind of global digital society (albeit one of a very small elite), we have – between cat GIFs and shaking our fists at all the evil that governments do (and there’s lots of it) – dropped the ball on forming reasonable and consistent models for how to integrate all our different, inconsistent rules and laws, and on how we might gain any sort of politically legitimized control over corporations, governments, and other entities of power.

    Tim Cook’s letter starts with the following words:

    This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

    On that he and I completely agree.


    _____

    Jürgen Geuter (@tante) is a political computer scientist living in Germany. For about 10 years he has been speaking and writing about technology, digitalization, digital culture and the way these influence mainstream society. His writing has been featured in Der Spiegel, Wired Germany and other publications as well as his own blog Nodes in a Social Network, on which an earlier version of this post first appeared.


  • Coding Bootcamps and the New For-Profit Higher Ed


    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

    Despite the massive amounts of money spent by the industry to prop itself up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans they can’t afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded to help meet the (purported) demands of the job market: training people in certain technical skills, particularly those that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education that new either.


    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

    The curriculum of these commercial colleges was largely based on the demands of local employers and an economy that was changing with the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5,800 in colleges and universities – something that highlights what remains a familiar refrain today: that traditional higher ed institutions do not meet everyone’s needs.


    The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.


    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ (almost) most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.


    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
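
    To make the arithmetic of that test concrete, here is a minimal sketch in code. The 8% and 20% thresholds come from the rule as described above; the reading that staying within either cap counts as passing, and all the sample figures, are my assumptions for illustration.

        # Illustrative sketch of the "gainful employment" debt-to-earnings test.
        # Assumption: a program passes if graduates' annual loan payments stay
        # within EITHER cap (8% of annual earnings or 20% of discretionary
        # earnings). All sample figures below are invented.

        def passes_gainful_employment(annual_loan_payment: float,
                                      annual_earnings: float,
                                      discretionary_earnings: float) -> bool:
            """True if graduates' debt load stays within either cap."""
            return (annual_loan_payment <= 0.08 * annual_earnings
                    or annual_loan_payment <= 0.20 * discretionary_earnings)

        # A hypothetical graduate paying $3,600 a year on $30,000 in earnings,
        # $12,000 of which is discretionary: both caps work out to $2,400,
        # so the program would face penalties.
        print(passes_gainful_employment(3600, 30000, 12000))  # False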

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceiving, as the price tags and lengths of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5,000 deposit) but then require that graduates pay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high, as the quick comparison below illustrates.
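
    This hypothetical comparison borrows the survey’s $11,852 average tuition from above and assumes, purely for illustration, the $60,000 salary figure cited earlier in connection with Bloc’s job guarantee; the $5,000 deposit is left out of the calculation.

        # Hypothetical comparison: average upfront bootcamp tuition versus a
        # deferred income-share arrangement (20% of first-year salary). The
        # $60,000 salary is an assumption, not a reported average.

        average_upfront_tuition = 11_852        # Course Report survey average
        income_share_repayment = 0.20 * 60_000  # 20% of an assumed $60k salary

        print(f"Average upfront tuition: ${average_upfront_tuition:,}")
        print(f"Income-share repayment:  ${income_share_repayment:,.0f}")
        # $12,000 > $11,852: the "free" program can turn out to be pricier.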

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)


    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    • Age: mean 30.95
    • Gender: female 36.3%; male 63.1%
    • Ethnicity: American Indian 1.0%; Asian American 14.0%; Black 5.0%; Other 17.2%; White 62.8%
    • Hispanic origin: yes 20.3%; no 79.7%
    • US citizenship: born in the US 78.2%; naturalized 9.7%; not a citizen 12.2%
    • Education: high school dropout 0.2%; high school graduate 2.6%; some college 14.2%; Associate’s degree 4.1%; Bachelor’s degree 62.1%; Master’s degree 14.2%; professional degree 1.5%; doctorate 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male from more affluent neighborhoods, and MOOC students also tend to already possess Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, 1 in 10 African American students, 1 in 14 Latino students, and 1 in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students at for-profits have about half as much family income as students at not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study of for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.


    According to one 2010 report, just 22% of first- and full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about these providers’ student populations and student outcomes remains pretty speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    By making financial aid available for bootcamps and MOOCs, one does have to wonder if the Obama Administration is not simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.


    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community colleges students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.

    This is what we know from history: as the funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • The Ground Beneath the Screens

    The Ground Beneath the Screens

a review of Jussi Parikka, A Geology of Media (University of Minnesota Press, 2015) and The Anthrobscene (University of Minnesota Press, 2015)
    by Zachary Loeb

    ~

    Despite the aura of ethereality that clings to the Internet, today’s technologies have not shed their material aspects. Digging into the materiality of such devices does much to trouble the adoring declarations of “The Internet Is the Answer.” What is unearthed by digging is the ecological and human destruction involved in the creation of the devices on which the Internet depends—a destruction that Jussi Parikka considers an obscenity at the core of contemporary media.

Parikka’s tale begins deep below the Earth’s surface in deposits of a host of different minerals that are integral to the variety of devices without which you could not be reading these words on a screen. This story encompasses the labor conditions in which these minerals are extracted and eventually turned into finished devices; it tells of satellites, undersea cables, and massive server farms; and it includes a dark premonition of the return to the Earth which will occur following the death (possibly a premature death due to planned obsolescence) of the screen at which you are currently looking.

In a connected duo of new books, The Anthrobscene (referenced below as A) and A Geology of Media (referenced below as GM), media scholar Parikka wrestles with the materiality of the digital. Parikka examines the pathways by which planetary elements become technology, while considering the transformations entailed in the anthropocene, and artistic attempts to render all of this understandable. Drawing upon thinkers ranging from Lewis Mumford to Donna Haraway and from the Situationists to Siegfried Zielinski, Parikka constructs a way of approaching media that emphasizes that it is born of the Earth, borne upon the Earth, and fated eventually to return to its place of origin. Parikka’s work demands that materiality be taken seriously not only by those who study media but also by all of those who interact with media – it is a demand that the anthropocene must be made visible.

Time is an important character in both The Anthrobscene and A Geology of Media, for it provides the context in which one can understand the long history of the planet as well as the scale of the years required for media to truly decompose. Parikka argues that materiality needs to be considered beyond a simple focus upon machines and infrastructure, taking into account “the idea of the earth, light, air, and time as media” (GM 3). Geology is harnessed as a method of ripping open the black box of technology and analyzing what the components inside are made of – copper, lithium, coltan, and so forth. The engagement with geological materiality is key for understanding the environmental implications of media, both in terms of the technologies currently in circulation and in terms of predicting the devices that will emerge in the coming years. Too often the planet is given short shrift in considerations of the technical, but “it is the earth that provides for media and enables it”; it is “the affordances of its geophysical reality that make technical media happen” (GM 13). Drawing upon Mumford’s writings about “paleotechnics” and “neotechnics” (concepts which Mumford had himself adapted from the work of Patrick Geddes), Parikka emphasizes that both the age of coal (paleotechnics) and the age of electricity (neotechnics) are “grounded in the wider mobilization of the materiality of the earth” (GM 15). Indeed, electric power is often still quite reliant upon the extraction and burning of coal.

More than just a pithy neologism, “anthrobscene” is the term Parikka introduces to highlight the ecological violence inherent in “the massive changes human practices, technologies, and existence have brought across the ecological board” (GM 16-17), shifts that often go under the more morally vague title of “the anthropocene.” For Parikka, “the addition of the obscene is self-explanatory when one starts to consider the unsustainable, politically dubious, and ethically suspicious practices that maintain technological culture and its corporate networks” (A 6). Like a curse word bleeped out by television censors, much of the obscenity of the anthropocene goes unheard even as governments and corporations compete with ever greater élan for the privilege of pillaging portions of the planet – Parikka seeks to reinscribe the obscenity.

    The world of high tech media still relies upon the extraction of metals from the earth and, as Parikka shows, a significant portion of the minerals mined today are destined to become part of media technologies. Therefore, in contemplating geology and media it can be fruitful to approach media using Zielinski’s notion of “deep time” wherein “durations become a theoretical strategy of resistance against the linear progress myths that impose a limited context for understanding technological change” (GM 37, A 23). Deploying the notion of “deep time” demonstrates the ways in which a “metallic materiality links the earth to the media technological” while also emphasizing the temporality “linked to the nonhuman earth times of decay and renewal” (GM 44, A 30). Thus, the concept of “deep time” can be particularly useful in thinking through the nonhuman scales of time involved in media, such as the centuries required for e-waste to decompose.

    Whereas “deep time” provides insight into media’s temporal quality, “psychogeophysics” presents a method for thinking through the spatial. “Psychogeophysics” is a variation of the Situationist idea of “the psychogeographical,” but where the Situationists focused upon the exploration of the urban environment, “psychogeophysics” (which appeared as a concept in a manifesto in Mute magazine) moves beyond the urban sphere to contemplate the oblate spheroid that is the planet. What the “geophysical twist brings is a stronger nonhuman element that is nonetheless aware of the current forms of exploitation but takes a strategic point of view on the nonorganic too” (GM 64). Whereas an emphasis on the urban winds up privileging the world built by humans, the shift brought by “psychogeophysics” allows people to bear witness to “a cartography of architecture of the technological that is embedded in the geophysical” (GM 79).

The material aspects of media technology encompass many areas where visibility has broken down. In many cases this is suggestive of an almost willful disregard (ignoring exploitative mining and labor conditions as well as the harm caused by e-waste), but in still other cases it is reflective of the minute scales that materiality can assume (such as the metallic dust that dangerously fills workers’ lungs after they shine iPad cases). The devices that are surrounded by an optimistic aura in some nations thus obtain this sheen at the literal expense of others: “the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force” (GM 89). Indeed, those who fawn with religious adoration over the newest high-tech gizmo may simply be demonstrating that nobody they know personally will be sickened in assembling it, or be poisoned by it when it becomes e-waste. An emphasis on geology and materiality, as Parikka demonstrates, shows that the era of digital capitalism contains many echoes of the exploitation characteristic of bygone periods – appropriation of resources, despoiling of the environment, mistreatment of workers, exportation of waste: these tragedies have never ceased.

Digital media is excellent at creating a futuristic veneer of “smart” devices and immaterial-sounding aspects such as “the cloud,” and yet a material analysis demonstrates the validity of the old adage “the more things change the more they stay the same.” Despite efforts to “green” digital technology, “computer culture never really left the fossil (fuel) age anyway” (GM 111). But beyond relying on fossil fuels for energy, these devices can themselves be considered as fossils-to-be as they go to rest in dumps wherein they slowly degrade, so that “we can now ask what sort of fossil layer is defined by the technical media condition…our future fossils layers are piling up slowly but steadily as an emblem of an apocalypse in slow motion” (GM 119). We may not be surrounded by dinosaurs and trilobites, but the digital media that we encounter are tomorrow’s fossils – which may be quite mysterious and confounding to those who, thousands of years hence, dig them up. Businesses that make and sell digital media thrive on a sense of time that consists of planned obsolescence, regular updates, and new products, but to take responsibility for the materiality of these devices requires that “notions of temporality must escape any human-obsessed vocabulary and enter into a closer proximity with the fossil” (GM 135). It requires a woebegone recognition that our technological detritus may be present on the planet long after humanity has vanished.

The living dead that lurch alongside humanity today are not the zombies of popular entertainment, but the undead media devices that provide the screens for consuming such distractions. Already fossils, bound to be disposed of long before they stop working, these devices make it vital “to be able to remember that media never dies, but remains as toxic residue,” and thus “we should be able to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41). We live with these zombies, we live among them, and even when we attempt to pack them off to unseen graveyards they survive under the surface. A Geology of Media is thus “a call for further materialization of media not only as media but as that bit which it consists of: the list of the geophysical elements that give us digital culture” (GM 139).

    It is not simply that “machines themselves contain a planet” (GM 139) but that the very materiality of the planet is becoming riddled with a layer of fossilized machines.

    * * *

The image of the world conjured up by Parikka in A Geology of Media and The Anthrobscene is far from comforting – after all, Parikka’s preference for talking about “the anthrobscene” does much to set a funereal tone. Nevertheless, these two books do much to demonstrate that “obscene” may be a very fair word to use when discussing today’s digital media. By emphasizing the materiality of media, Parikka avoids the thorny discussions of the benefits and shortfalls of various platforms to instead pose a more challenging ethical puzzle: even if a given social media platform can be used for ethical ends, to what extent is that use irrevocably tainted by the materiality of the device used to access the platform? It is a dark assessment which Parikka delivers without much in the way of optimistic varnish, describing the anthropocene (on the first page of The Anthrobscene) as “a concept that also marks the various violations of environmental and human life in corporate practices and technological culture that are ensuring that there won’t be much of humans in the future scene of life” (A 1).

And yet both books manage to avoid the pitfall of simply coming across as wallowing in doom. Parikka is not pining for a primal pastoral fantasy, but is instead seeking to provide new theoretical tools with which his readers can attempt to think through the materiality of media. Here, Parikka’s emphasis on the way that digital technology is still heavily reliant upon mining and fossil fuels acts as an important counter to gee-whiz futurism. Similarly, Parikka’s mobilization of the notion of “deep time” and fossils acts as an important contribution to thinking through the lifecycles of digital media. Dwelling on the undeath of a smartphone slowly decaying in an e-waste dump over centuries is less about evoking a fearful horror than it is about making clear the horribleness of technological waste. The discussion of “deep time” seems like it can function as a sort of geological brake on accelerationist thinking, by emphasizing that no matter how fast humans go, the planet has its own sense of temporality. Throughout these two slim books, Parikka draws upon a variety of cultural works to strengthen his argument: ranging from the earth-pillaging mad scientist of Arthur Conan Doyle’s Professor Challenger, to the Coal Fired Computers of Yokokoji-Harwood (YoHa), to Molleindustria’s smartphone game “Phone Story,” which plays out on a smartphone’s screen the tangles of extraction, assembly, and disposal that are as much a part of the smartphone’s story as whatever uses to which the final device is eventually put. Cultural and artistic works, when they intend to, may be able to draw attention to the obscenity of the anthropocene.

The Anthrobscene and A Geology of Media are complementary texts, but one need not read both in order to understand the other. As part of the University of Minnesota Press’s “Forerunners” series, The Anthrobscene is a small book (in terms of page count and physical size) which moves at a brisk pace; in some ways it functions as a sort of greatest-hits version of A Geology of Media – containing many of the essential high points, but lacking some of the elements that ultimately make A Geology of Media a satisfying and challenging book. Yet the two work wonderfully together, with The Anthrobscene acting as a sort of primer – that a reader of both books will detect many similarities between the two is not a major detraction, for these books tell a story that often goes unheard today.

Those looking for neat solutions to the anthropocene’s quagmire will not find them in either of these books – and as these texts are primarily aimed at an academic audience this is not particularly surprising. These books are not caught up in offering hope – be it false or genuine. When, at the close of A Geology of Media, Parikka discusses the need “to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41), this appears not as a perfect panacea but as a way of possibly adjusting. Parikka is correct in emphasizing the ways in which the extractive regimes that characterized the paleotechnic continue on in the neotechnic era, and this is a point which Mumford himself made regarding the way that the various “technic” eras do not represent clean breaks from each other. As Mumford put it, “the new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures” (Mumford 2010, 236) – in other words, just as Parikka explains, the paleotechnic survives well into the neotechnic. The reason this is worth mentioning is not to challenge Parikka, but to highlight that the “neotechnic” is not meant as a characterization of a utopian technical epoch that has parted ways with the exploitation that had characterized the preceding period. For Mumford the need was to move beyond the anthropocentrism of the neotechnic period and move towards what he called (in The Culture of Cities) the “biotechnic,” a period wherein “technology itself will be oriented toward the culture of life” (Mumford 1938, 495). Granted, as Mumford’s later work and these books by Parikka make clear, instead of arriving at the “biotechnic” what we may get is the anthrobscene. And reading these books by Parikka makes it clear that one could not characterize the anthrobscene as being “oriented toward the culture of life” – indeed, it may be exactly the opposite. Or, to stick with Mumford a bit longer, it may be that the anthrobscene is the result of the triumph of “authoritarian technics” over “democratic” ones. Nevertheless, the truly dirge-like element of Parikka’s books is that they raise the possibility that it may well be too late to shift paths – that the neotechnic was perhaps just a coat of fresh paint applied to hide the rusting edifice of paleotechnics.

A Geology of Media and The Anthrobscene are conceptual toolkits; they provide the reader with the drills and shovels needed to dig into the materiality of digital media. But what these books make clear is that, along with the pickaxe and the archeologist’s brush, anyone digging into the materiality of media also needs a gasmask to endure the noxious fumes. Ultimately, what Parikka shows is that the Situationist-inspired graffiti of May 1968, “beneath the streets – the beach,” needs to be rewritten in the anthrobscene.

    Perhaps a fitting variation for today would read: “beneath the streets – the graveyard.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    Mumford, Lewis. 2010. Technics and Civilization. Chicago: University of Chicago Press.

    Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.

  • How Ex Machina Abuses Women of Color and Nobody Cares Cause It's Smart

    How Ex Machina Abuses Women of Color and Nobody Cares Cause It's Smart

a review of Alex Garland, dir. & writer, Ex Machina (A24/Universal Films, 2015)
    by Sharon Chang
    ~

    In April of this year British science fiction thriller Ex Machina opened in the US to almost unanimous rave reviews. The film was written and directed by Alex Garland, author of bestselling 1996 novel The Beach (also made into a movie) and screenwriter of 28 Days Later (2002) and Never Let Me Go (2010). Ex Machina is Garland’s directorial debut. It’s about a young white coder named Caleb who gets the opportunity to visit the secluded mountain home of his employer Nathan, pioneering programmer of the world’s most powerful search engine (Nathan’s appearance is ambiguous but he reads non-white and the actor who plays him is Guatemalan). Caleb believes the trip innocuous but quickly learns that Nathan’s home is actually a secret research facility in which the brilliant but egocentric and obnoxious genius has been developing sophisticated artificial intelligence. Caleb is immediately introduced to Nathan’s most upgraded construct–a gorgeous white fembot named Ava. And the mind games ensue.

    As the week unfolds the only things we know for sure are (a) imprisoned Ava wants to be free, and, (b) Caleb becomes completely enamored and wants to “rescue” her. Other than that, nothing is clear. What are Ava’s true intentions? Does she like Caleb back or is she just using him to get out? Is Nathan really as much an asshole as he seems or is he putting on a show to manipulate everyone? Who should we feel sorry for? Who should we empathize with? Who should we hate? Who’s the hero? Reviewers and viewers alike are melting in intellectual ecstasy over this brain-twisty movie. The Guardian calls it “accomplished, cerebral film-making”; Wired calls it “one of the year’s most intelligent and thought-provoking films”; Indiewire calls it “gripping, brilliant and sensational”. Alex Garland apparently is the smartest, coolest new director on the block. “Garland understands what he’s talking about,” says RogerEbert.com, and goes “to the trouble to explain more abstract concepts in plain language.”

    Right.

    I like sci-fi and am a fan of Garland’s previous work so I was excited to see his new flick. But let me tell you, my experience was FAR from “brilliant” and “heady” like the multitudes of moonstruck reviewers claimed it would be. Actually, I was livid. And weeks later–I’m STILL pissed. Here’s why…

    *** Spoiler Alert ***

    You wouldn’t know it from the plethora of glowing reviews out there cause she’s hardly mentioned (telling in and of itself) but there’s another prominent fembot in the film. Maybe fifteen minutes into the story we’re introduced to Kyoko, an Asian servant sex slave played by mixed-race Japanese/British actress Sonoya Mizuno. Though bound by abusive servitude, Kyoko isn’t physically imprisoned in a room like Ava because she’s compliant, obedient, willing.

Kyoko first appears on screen demure and silent, bringing a surprised Caleb breakfast in his room. Of course I recognized the trope of the servile Asian woman right away and, as I wrote in February, how quickly Asian/whites are treated as non-white when they look ethnic in any way. I was instantly uncomfortable. Maybe there’s a point, I thought to myself. But soon after we see Kyoko serving sushi to the men. She accidentally spills food on Caleb. Nathan loses his temper, yells at her, and then explains to Caleb that she can’t understand, which makes her incompetence even more infuriating. This is how we learn Kyoko is mute and can’t speak. Yep. Nathan didn’t give her a voice. He further programmed her, purportedly, not to understand English.

    Sex slave “Kyoko” played by Japanese/British actress Sonoya Mizuno (image source: i09.com)

    I started to get upset. If there was a point, Garland had better get to it fast.

    Unfortunately the treatment of Kyoko’s character just keeps spiraling. We continue to learn more and more about her horrible existence in a way that feels gross only for shock value rather than for any sort of deconstruction, empowerment, or liberation of Asian women. She is always at Nathan’s side, ready and available, for anything he wants. Eventually Nathan shows Caleb something else special about her. He’s coded Kyoko to love dancing (“I told you you’re wasting your time talking to her. However you would not be wasting your time–if you were dancing with her”). When Nathan flips a wall switch that washes the room in red lights and music then joins a scantily-clad gyrating Kyoko on the dance floor, I was overcome by disgust:

    [youtube https://www.youtube.com/watch?v=hGY44DIQb-A?feature=player_embedded]

    I recently also wrote about Western exploitation of women’s bodies in Asia (incidentally also in February), in particular noting it was US imperialistic conquest that jump-started Thailand’s sex industry. By the 1990s several million tourists from Europe and the U.S. were visiting Thailand annually, many specifically for sex and entertainment. Writer Deena Guzder points out in “The Economics of Commercial Sexual Exploitation” for the Pulitzer Center on Crisis Reporting that Thailand’s sex tourism industry is driven by acute poverty. Women and girls from poor rural families make up the majority of sex workers. “Once lost in Thailand’s seedy underbelly, these women are further robbed of their individual agency, economic independence, and bargaining power.” Guzder gloomily predicts, “If history repeats itself, the situation for poor Southeast Asian women will only further deteriorate with the global economic downturn.”

    Red Light District, Phuket (image source: phuket.com)

You know who wouldn’t be a stranger to any of this? Alex Garland. His first novel, The Beach, is set in Thailand and his second novel, The Tesseract, is set in the Philippines, both developing nations where Asian women continue to be used and abused for Western gain. In a 1999 interview with journalist Ron Gluckman, Garland said he made his first trip to Asia as a teenager in high school and had been back at least once or twice almost every year since. He also lived in the Philippines for 9 months. In a perhaps telling choice of words, Gluckman wrote that Garland had “been bitten by the Asian bug, early and deep.” At the time many Asian critics were criticizing The Beach as a shallow look at the region by an uninformed outsider, but Garland protested in his interview:

    A lot of the criticism of The Beach is that it presents Thais as two dimensional, as part of the scenery. That’s because these people I’m writing about–backpackers–really only see them as part of the scenery. They don’t see them or the Thai culture. To them, it’s all part of a huge theme park, the scenery for their trip. That’s the point.

I disagree severely with Garland. In insisting on his right to portray people of color one way while dismissing how those people see themselves, he not only centers his privileged perspective (i.e. white, male) but shows determined disinterest in representing oppressed people transformatively. It leads me to wonder how much he really knows or cares about inequity and uplifting marginalized voices. Indeed in Ex Machina the only point that Garland ever seems to make is that racist/sexist tropes exist, not that we’re going to do anything about them. And that kind of non-critical, non-resistant attitude does more to reify and reinforce than anything else. Take for instance a recent interview with Cinematic Essential (one of few where the interviewer asked about race), in which Garland had this to say about stereotypes in his new film:

    Sometimes you do things unconsciously, unwittingly, or stupidly, I guess, and the only embedded point that I knew I was making in regards to race centered around the tropes of Kyoko [Sonoya Mizuno], a mute, very complicit Asian robot, or Asian-appearing robot, because of course, she, as a robot, isn’t Asian. But, when Nathan treats the robot in the discriminatory way that he treats it, I think it should be ambivalent as to whether he actually behaves this way, or if it’s a very good opportunity to make him seem unpleasant to Caleb for his own advantage.

First, approaching race “unconsciously” or “unwittingly” is never a good idea and is moreover a classic symptom of white willful ignorance. Second, Kyoko isn’t Asian because she’s a robot? Race isn’t biological or written into human DNA. It’s socio-politically constructed and assigned, usually by those in power. Kyoko is Asian because she has been made that way not only by her oppressor, Nathan, but by Garland himself, the omniscient creator of all. Third, Kyoko represents the only embedded race point in the movie? False. There are two other women of color who play enslaved fembots in Ex Machina, and their characters are abused just as badly. “Jasmine” is one of Nathan’s early fembots. She’s Black. We see her body twice: once being instructed how to write and once being dragged lifeless across the floor. You will never recognize real-life Black model and actress Symara A. Templeman in the role, however. Why? Because her always naked body is inexplicably headless when it appears. That’s right. One of the sole Black bodies/persons in the entire film does not have (per Garland’s writing and direction) a face, head, or brain.

    Symara A. Templeman, who played “Jasmine” in Ex Machina (image source: Templeman on Google+)

“Jade,” played by Asian model and actress Gana Bayarsaikhan, is presumably also a less successful fembot predating Kyoko but perhaps succeeding Jasmine. She too is always shown naked but, unlike Jasmine, she has a head, and, unlike Kyoko, she speaks. We see her being questioned repeatedly by Nathan while trapped behind glass. Jade is resistant and angry. She doesn’t understand why Nathan won’t let her out and escalates to the point that we are led to believe she is decommissioned for her defiance.

It’s significant that Kyoko, a mixed-race Asian/white woman, later becomes the “upgraded” Asian model. It’s also significant that at the movie’s end white Ava finds Jade’s decommissioned body in a closet in Nathan’s room and skins it to cover her own body. (Remember when Katy Perry joked in 2012 she was obsessed with Japanese people and wanted to skin one?) Ava has the option of white bodies but after examining them meticulously she deliberately chooses Jade. Though we met Jasmine earlier, her Black body is conspicuously missing from the closets full of bodies Nathan has stored for his pleasure and use. And though Kyoko does help Ava kill Nathan in the end, she herself is “killed” in the process (i.e. never free) and Ava doesn’t care at all. What does all this show? A very blatant standard of beauty/desire that is not only male-designed but clearly a light, white, and violently assimilative one.

    Gana Bayarsaikhan, who played “Jade” in Ex Machina (image source: profile-models.com)

I can’t even begin to tell you how offended and disturbed I was by the treatment of women of color in this movie. I slept restlessly the night after I saw Ex Machina, woke up muddled at 2:45 AM and–still clinging to the hope that there must have been a reason for treating women of color this way (Garland’s brilliant, right?)–furiously went to work reading interviews and critiques. Aside from a few brief mentions of race/gender, I found barely anything addressing the film’s obvious deployment of racialized gender stereotypes for its own benefit. For me this movie will be joining the long list of many so-called film classics I will never be able to admire. Movies where supposed artistry and brilliance are acceptable excuses for “unconscious,” “unwitting” racism and sexism. Ex Machina may be smart in some ways, but it damn sure isn’t in others.

    Correction (8/1/2015): An earlier version of this post incorrectly stated that actress Symara A. Templeman was the only Black person in the film. The post has been updated to indicate that the movie also featured at least one other Black actress, Deborah Rosan, in an uncredited role as Office Manager.

    _____

    Sharon H. Chang is an author, scholar, sociologist and activist. She writes primarily on racism, social justice and the Asian American diaspora with a feminist lens. Her pieces have appeared in Hyphen Magazine, ParentMap Magazine, The Seattle Globalist, on AAPI Voices and Racism Review. Her debut book, Raising Mixed Race: Multiracial Asian Children in a Post-Racial World, is forthcoming through Paradigm Publishers as part of Joe R. Feagin’s series “New Critical Viewpoints on Society.” She also sits on the board for Families of Color Seattle and is on the planning committee for the biennial Critical Mixed Race Studies Conference. She blogs regularly at Multiracial Asian Families, where an earlier version of this post first appeared.

    The editors thank Dorothy Kim for referring us to this essay.

    Back to the essay

  • Poetics of Control

    Poetics of Control

    a review of Alexander R. Galloway, The Interface Effect (Polity, 2012)

    by Bradley J. Fest

    ~

    This summer marks the twenty-fifth anniversary of the original French publication of Gilles Deleuze’s seminal essay, “Postscript on the Societies of Control” (1990). A strikingly powerful short piece, “Postscript” remains, even at this late date, one of the most poignant, prescient, and concise diagnoses of life in the overdeveloped digital world of the twenty-first century and the “ultrarapid forms of apparently free-floating control that are taking over from the old disciplines.”[1] A stylistic departure from much of Deleuze’s other writing in its clarity and straightforwardness, the essay describes a general transformation from the modes of disciplinary power that Michel Foucault famously analyzed in Discipline and Punish (1975) to “societies of control.” For Deleuze, the late twentieth century is characterized by “a general breakdown of all sites of confinement—prisons, hospitals, factories, schools, the family.”[2] The institutions that were formerly able to strictly organize time and space through perpetual surveillance—thereby, according to Foucault, fabricating the modern individual subject—have become fluid and modular, “continually changing from one moment to the next.”[3] Individuals have become “dividuals,” “dissolv[ed] . . . into distributed networks of information.”[4]

    Over the past decade, media theorist Alexander R. Galloway has extensively and rigorously elaborated on Deleuze’s suggestive pronouncements, probably devoting more pages in print to thinking about the “Postscript” than has any other single writer.[5] Galloway’s most important work in this regard is his first book, Protocol: How Control Exists after Decentralization (2004). If the figure for the disciplinary society was Jeremy Bentham’s panopticon, a machine designed to induce a sense of permanent visibility in prisoners (and, by extension, the modern subject), Galloway argues that the distributed network, and particularly the distributed network we call the internet, is an apposite figure for control societies. Rhizomatic and flexible, distributed networks historically emerged as an alternative to hierarchical, rigid, centralized (and decentralized) networks. But far from being chaotic and unorganized, the protocols that organize our digital networks have created “the most highly controlled mass media hitherto known. . . . While control used to be a law of society, now it is more like a law of nature. Because of this, resisting control has become very challenging indeed.”[6] To put it another way: if in 1980 Deleuze and Félix Guattari complained that “we’re tired of trees,” Galloway and philosopher Eugene Thacker suggest that today “we’re tired of rhizomes.”[7]

    The imperative to think through the novel challenges presented by control societies and the urgent need to develop new methodologies for engaging the digital realities of the twenty-first century are at the heart of The Interface Effect (2012), the final volume in a trio of works Galloway calls Allegories of Control.[8] Guiding the various inquiries in the book is his provocative claim that “we do not yet have a critical or poetic language in which to represent the control society.”[9] This is because there is an “unrepresentability lurking within information aesthetics” (86). This claim for unrepresentability, that what occurs with digital media is not representation per se, is The Interface Effect’s most significant departure from previous media theory. Rather than rehearse familiar media ecologies, Galloway suggests that “the remediation argument (handed down from McLuhan and his followers including Kittler) is so full of holes that it is probably best to toss it wholesale” (20). The Interface Effect challenges thinking about mimesis that would place computers at the end of a line of increasingly complex modes of representation, a line extending from Plato, through Erich Auerbach, Marshall McLuhan, and Friedrich Kittler, and terminating in Richard Grusin, Jay David Bolter, and many others. Rather than continue to understand digital media in terms of remediation and representation, Galloway emphasizes the processes of computational media, suggesting that the inability to productively represent control societies stems from misunderstandings about how to critically analyze and engage with the basic materiality of computers.

The book begins with an introduction polemically positioning Galloway’s own media theory directly against Lev Manovich’s field-defining book, The Language of New Media (2001). Contra Manovich, Galloway stresses that digital media are not objects but actions. Unlike cinema, which he calls an ontology because it attempts to bring some aspect of the phenomenal world nearer to the viewer—film, echoing Oedipa Maas’s famous phrase, “projects worlds” (11)—computers involve practices and effects (what Galloway calls an “ethic”) because they are “simply on a world . . . subjecting it to various forms of manipulation, preemption, modeling, and synthetic transformation. . . . The matter at hand is not that of coming to know a world, but rather that of how specific, abstract definitions are executed to form a world” (12, 13, 23). Or to take two other examples Galloway uses to positive effect: the difference can be understood as that between language, which describes and represents, encoding a world, and calculus, which does or simulates doing something to the world; calculus is a “system of reasoning, an executable machine” (22). Though Galloway does more in Gaming: Essays on Algorithmic Culture (2006) to fully develop a way of analyzing computational media that privileges action over representation, The Interface Effect theoretically grounds this important distinction between mimesis and action, description and process.[10] Further, it constitutes a bold methodological step away from some of the dominant ways of thinking about digital media that simultaneously offers its readers new ways to connect media studies more firmly to politics.

    Further distinguishing himself from writers like Manovich, Galloway says that there has been a basic misunderstanding regarding media and mediation, and that the two systems are “violently unconnected” (13). Galloway demonstrates, in contrast to such thinkers as Kittler, that there is an old line of thinking about mediation that can be traced very far back and that is not dependent on thinking about media as exclusively tied to nineteenth and twentieth century communications technology:

    Doubtless certain Greek philosophers had negative views regarding hypomnesis. Yet Kittler is reckless to suggest that the Greeks had no theory of mediation. The Greeks indubitably had an intimate understanding of the physicality of transmission and message sending (Hermes). They differentiated between mediation as immanence and mediation as expression (Iris versus Hermes). They understood the mediation of poetry via the Muses and their techne. They understood the mediation of bodies through the “middle loving” Aphrodite. They even understood swarming and networked presence (in the incontinent mediating forms of the Eumenides who pursued Orestes in order to “process” him at the procès of Athena). Thus we need only look a little bit further to shed this rather vulgar, consumer-electronics view of media, and instead graduate into the deep history of media as modes of mediation. (15)

    Galloway’s point here is that the larger contemporary discussion of mediation that he is pursuing in The Interface Effect should not be restricted to merely the digital artifacts that have occasioned so much recent theoretical activity, and that there is an urgent need for deeper histories of mediation. Though the book appears to be primarily concerned with the twentieth and twenty-first century, this gesture toward the Greeks signals the important work of historicization that often distinguishes much of Galloway’s work. In “Love of the Middle” (2014), for example, which appears in the book Excommunication (2014), co-authored with Thacker and McKenzie Wark, Galloway fully develops a rigorous reading of Greek mediation, suggesting that in the Eumenides, or what the Romans called the Furies, reside a notable historical precursor for understanding the mediation of distributed networks.[11]

    In The Interface Effect these larger efforts at historicization allow Galloway to always understand “media as modes of mediation,” and consequently his big theoretical step involves claiming that “an interface is not a thing, an interface is an effect. It is always a process or a translation” (33). There are a variety of positive implications for the study of media understood as modes of mediation, as a study of interface effects. Principal amongst these are the rigorous methodological possibilities Galloway’s focus emphasizes.

    In this, methodologically and otherwise, Galloway’s work in The Interface Effect resembles and extends that of his teacher Fredric Jameson, particularly the kind of work found in The Political Unconscious (1981). Following Jameson’s emphasis on the “poetics of social forms,” Galloway’s goal is “not to reenact the interface, much less to ‘define’ it, but to identify the interface itself as historical. . . . This produces . . . a perspective on how cultural production and the socio-historical situation take form as they are interfaced together” (30). The Interface Effect firmly ties the cultural to the social, economic, historical, and political, finding in a variety of locations ways that interfaces function as allegories of control. “The social field itself constitutes a grand interface, an interface between subject and world, between surface and source, and between critique and the objects of criticism. Hence the interface is above all an allegorical device that will help us gain some perspective on culture in the age of information” (54). The power of looking at the interface as an allegorical device, as a “control allegory” (30), is demonstrated throughout the book’s relatively wide-ranging analyses of various interface effects.

    Chapter 1, “The Unworkable Interface,” historicizes some twentieth century transformations of the interface, concisely summarizing a history of mediation by moving from Norman Rockwell’s “Triple Self-Portrait” (1960), through Mad Magazine’s satirization of Rockwell, to World of Warcraft (2004-2015). Viewed from the level of the interface, with all of its nondiegetic menus and icons and the ways it erases the line between play and labor, Galloway demonstrates both here and in the last chapter that World of Warcraft is a powerful control allegory: “it is not an avant-garde image, but, nevertheless, it firmly delivers an avant-garde lesson in politics” (44).[12] Further exemplifying the importance of historicizing interfaces, Chapter 2 continues to demonstrate the value of approaching interface effects allegorically. Galloway finds “a formal similarity between the structure of ideology and the structure of software” (55), arguing that software “is an allegorical figure for the way in which . . . political and social realities are ‘resolved’ today: not through oppression or false consciousness . . . but through the ruthless rule of code” (76). Chapter 4 extends such thinking toward a masterful reading of the various mediations at play in a show such as 24 (2001-2010, 2014), arguing that 24 is political not because of its content but “because the show embodies in its formal technique the essential grammar of the control society, dominated as it is by specific network and informatic logics” (119). In short, The Interface Effect continually demonstrates the potent critical tools approaching mediation as allegory can provide, reaffirming the importance of a Jamesonian approach to cultural production in the digital age.

Whether or not readers are convinced, however, by Galloway’s larger reworking of the field of digital media studies, his emphasis on attending to contemporary cultural artifacts as allegories of control, or his call in the book’s conclusion for a politics of “whatever being” probably depends upon their thoughts about the unrepresentability of today’s global networks in Chapter 3, “Are Some Things Unrepresentable?” His answer to the chapter’s question is, quite simply, “Yes.” Attempts to visualize the World Wide Web only result in incoherent repetition: “every map of the internet looks the same,” and as a result “no poetics is possible in this uniform aesthetic space” (85). He argues that, in the face of such an aesthetic regime—such a “distribution of the sensible,” in Jacques Rancière’s terms[13]—a new poetics is needed:

    The point is not so much to call for a return to cognitive mapping, which of course is of the highest importance, but to call for a poetics as such for this mysterious new machinic space. . . . Today’s systemics have no contrary. Algorithms and other logical structures are uniquely, and perhaps not surprisingly, monolithic in their historical development. There is one game in town: a positivistic dominant of reductive, systemic efficiency and expediency. Offering a counter-aesthetic in the face of such systematicity is the first step toward building a poetics for it, a language of representability adequate to it. (99)

    There are, to my mind, two ways of responding to Galloway’s call for a poetics as such in the face of the digital realities of contemporaneity.

    On the one hand, I am tempted to agree with him. Galloway is clearly signaling his debt to some of Jameson’s more important large claims and is reviving the need “to think the impossible totality of the contemporary world system,” what Jameson once called the “technological” or “postmodern sublime.”[14] But Galloway is also signaling the importance of poesis for this activity. Not only is Jamesonian “cognitive mapping” necessary, but the totality of twenty-first century digital networks requires new imaginative activity, a counter-aesthetics commensurate with informatics. This is an immensely attractive position, at least to me, as it preserves a space for poetic, avant-garde activity, and indeed, demands that, all evidence to the contrary, the imagination still has an important role to play in the face of societies of control. (In other words, there may be some “humanities” left in the “digital humanities.”[15]) Rather than suggesting that the imagination has been utterly foreclosed by the cultural logic of late capitalism—that we can no longer imagine any other world, that it is easier to imagine the end of the world than a better one—Galloway says that there must be a reinvestment in the imagination, in poetics as such, that will allow us to better represent, understand, and intervene in societies of control (though not necessarily to imagine a better world; more on this below). Given the present landscape, how could one not be attracted to such a position?

    On the other hand, Galloway’s argument hinges on his claim that such a poetics has not emerged and, as Patrick Jagoda and others have suggested, one might merely point out that such a claim is demonstrably false.[16] Though I hope I hardly need to list some of the significant cultural products across a range of media that have appeared over the last fifteen years that critically and complexly engage with the realities of control (e.g., The Wire [2002-08]), it is not radical to suggest that art engaged with pressing contemporary concerns has appeared and will continue to appear, that there are a variety of significant artists who are attempting to understand, represent, and cope with the distributed networks of contemporaneity. One could obviously suggest Galloway’s argument is largely rhetorical, a device to get his readers to think about the different kinds of poesis control societies, distributed networks, and interfaces call for, but this blanket statement threatens to shut down some of the vibrant activity that is going on all over the world commenting upon the contemporary situation. In other words, yes we need a poetics of control, but why must the need for such a poetics hinge on the claim that there has not yet emerged “a critical or poetic language in which to represent the control society”? Is not Galloway’s own substantial, impressive, and important decade-long intellectual project proof that people have developed a critical language that is capable of representing the control society? I would certainly answer in the affirmative.

    There are some other rhetorical choices in the conclusion of The Interface Effect that, though compelling, deserve to be questioned, or at least highlighted. I am referring to Galloway’s penchant—following another one of his teachers at Duke, Michael Hardt—for invoking a Bartlebian politics, what Galloway calls “whatever being,” as an appropriate response to present problems.[17] In Hardt and Antonio Negri’s Empire (2000), in the face of the new realities of late capitalism—the multitude, the management of hybridities, the non-place of Empire, etc.—they propose that Herman Melville’s “Bartleby in his pure passivity and his refusal of any particulars presents us with a figure of generic being, being as such, being and nothing more. . . . This refusal certainly is the beginning of a liberatory politics, but it is only a beginning.”[18] Bartleby, with his famous response of “‘I would prefer not to,’”[19] has been frequently invoked by such substantial figures as Giorgio Agamben in the 1990s and Slavoj Žižek in the 2000s (following Hardt and Negri). Such thinkers have frequently theorized Bartleby’s passive negativity as a potentially radical political position, and perhaps the only one possible in the face of global economic realities.[20] (And indeed, it is easy enough to read, say, Occupy Wall Street as a Bartlebian political gesture.) Galloway’s response to the affective postfordist labor of digital networks, that “each and every day, anyone plugged into a network is performing hour after hour of unpaid micro labor” (136), is similarly to withdraw, to “demilitarize being. Stand down. Cease participating” (143).

    Like Hardt and Negri and so many others, Galloway’s “whatever being” is a response to the failures of twentieth century emancipatory politics. He writes:

    We must stress that it is not the job of politics to invent a new world. On the contrary it is the job of politics to make all these new worlds irrelevant. . . . It is time now to subtract from this world, not add to it. The challenge today is not one of political or moral imagination, for this problem was solved ages ago—kill the despots, surpass capitalism, inclusion of the excluded, equality for all of humanity, end exploitation. The world does not need new ideas. The challenge is simply to realize what we already know to be true. (138-39)

    And thus the tension of The Interface Effect is between this call for withdrawal, to work with what there is, to exploit protocological possibility, etc., and the call for a poetics of control, a poesis capable of representing control societies, which to my mind implies imagination (and thus, inevitably, something different, if not new). If there is anything wanting about the book it is its lack of clarity about how these two critical projects are connected (or indeed, if they are perhaps the same thing!). Further, it is not always clear what exactly Galloway means by “poetics” nor how a need for a poetics corresponds to the book’s emphasis on understanding mediation as process over representation, action over objects. This lack of clarity may be due in part to the fact that, as Galloway indicates in his most recent work, Laruelle: Against the Digital (2014), there is some necessary theorization that he needs to do before he can adequately address the digital head-on. As he writes in the conclusion to that book: “The goal here has not been to elucidate, promote, or disparage contemporary digital technologies, but rather to draft a simple prolegomenon for future writing on digitality and philosophy.”[21] In other words, it seems like Allegories of Control, The Exploit: A Theory of Networks (2007), and Laruelle may constitute the groundwork for an even more ambitious confrontation with the digital, one where the kinds of tensions just noted might dissolve. As such, perhaps the reinvocation of a Bartlebian politics of withdrawal at the end of The Interface Effect is merely a kind of stop-gap, a place-holder before a more coherent poetics of control can emerge (as seems to be the case for the Hardt and Negri of Empire). Although contemporary theorists frequently invoke Bartleby, he remains a rather uninspiring figure.

    These criticisms aside, however, Galloway’s conclusion of the larger project that is Allegories of Control reveals him to be a consistently accessible and powerful guide to the control society and the digital networks of the twenty-first century. If the new directions in his recent work are any indication, and Laruelle is merely a prolegomenon to future projects, then we should perhaps not despair at all about the present lack of a critical language for representing control societies.

    _____

    Bradley J. Fest teaches literature at the University of Pittsburgh. At present he is working on The Nuclear Archive: American Literature Before and After the Bomb, a book investigating the relationship between nuclear and information technology in twentieth and twenty-first century American literature. He has published articles in boundary 2, Critical Quarterly, and Studies in the Novel; and his essays have appeared in David Foster Wallace and “The Long Thing” (2014) and The Silence of Fallout (2013). The Rocking Chair, his first collection of poems, is forthcoming from Blue Sketch Press. He blogs at The Hyperarchival Parallax.

    Back to the essay
    _____

    [1] Though best-known in the Anglophone world via the translation that appeared in 1992 in October as “Postscript on the Societies of Control,” the piece appears as “Postscript on Control Societies,” in Gilles Deleuze, Negotiations: 1972-1990, trans. Martin Joughin (New York: Columbia University Press, 1995), 178. For the original French see Gilles Deleuze, “Post-scriptum sur des sociétés de contrôle,” in Pourparlers, 1972-1990 (Paris: Les Éditions de Minuit, 1990), 240-47. The essay originally appeared as “Les sociétés de contrôle,” L’Autre Journal, no. 1 (May 1990). Further references are to the Negotiations version.

    [2] Ibid.

    [3] Ibid., 179.

    [4] Alexander R. Galloway, Protocol: How Control Exists after Decentralization (Cambridge, MA: MIT Press, 2004), 12n18.

    [5] In his most recent book, Galloway even goes so far as to ask about the “Postscript”: “Could it be that Deleuze’s most lasting legacy will consist of 2,300 words from 1990?” (Alexander R. Galloway, Laruelle: Against the Digital [Minneapolis: University of Minnesota Press, 2014], 96, emphases in original). For Andrew Culp’s review of Laruelle for The b2 Review, see “From the Decision to the Digital.”

    [6] Galloway, Protocol, 147.

    [7] Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), 15; and Alexander R. Galloway and Eugene Thacker, The Exploit: A Theory of Networks (Minneapolis: University of Minnesota Press, 2007), 153. For further discussions of networks see Alexander R. Galloway, “Networks,” in Critical Terms for Media Studies, ed. W. J. T. Mitchell and Mark B. N. Hansen (Chicago: University of Chicago Press), 280-96.

    [8] The other books in the trilogy include Protocol and Alexander R. Galloway, Gaming: Essays on Algorithmic Culture (Minneapolis: University of Minnesota Press, 2006).

    [9] Alexander R. Galloway, The Interface Effect (Malden, MA: Polity, 2012), 98. Hereafter, this work is cited parenthetically.

    [10] See especially Galloway’s masterful first chapter of Gaming, “Gamic Action, Four Moments,” 1-38. To my mind, this is one of the best primers for critically thinking about videogames, and it does much to fundamentally ground the study of videogames in action (rather than, as had previously been the case, in either ludology or narratology).

    [11] See Alexander R. Galloway, “Love of the Middle,” in Excommunication: Three Inquiries in Media and Mediation, by Alexander R. Galloway, Eugene Thacker, and McKenzie Wark (Chicago: University of Chicago Press, 2014), 25-76.

    [12] This is also something he touched on in his remarkable reading of Donald Rumsfeld’s famous “unknown unknowns.” See Alexander R. Galloway, “Warcraft and Utopia,” Ctheory.net (16 February 2006). For a discussion of labor in World of Warcraft, see David Golumbia, “Games Without Play,” in “Play,” special issue, New Literary History 40, no. 1 (Winter 2009): 179-204.

    [13] See the following by Jacques Rancière: The Politics of Aesthetics: The Distribution of the Sensible, trans. Gabriel Rockhill (New York: Continuum, 2004), and “Are Some Things Unrepresentable?” in The Future of the Image, trans. Gregory Elliott (New York: Verso, 2007), 109-38.

    [14] Fredric Jameson, Postmodernism; or, the Cultural Logic of Late Capitalism (Durham, NC: Duke University Press, 1991), 38.

    [15] For Galloway’s take on the digital humanities more generally, see his “Everything Is Computational,” Los Angeles Review of Books (27 June 2013), and “The Cybernetic Hypothesis,” differences 25, no. 1 (Spring 2014): 107-31.

    [16] See Patrick Jagoda, introduction to Network Aesthetics (Chicago: University of Chicago Press, forthcoming 2015).

    [17] Galloway’s “whatever being” is derived from Giorgio Agamben, The Coming Community, trans. Michael Hardt (Minneapolis: University of Minnesota Press, 1993).

    [18] Michael Hardt and Antonio Negri, Empire (Cambridge, MA: Harvard University Press, 2000), 203, 204.

    [19] Herman Melville, “Bartleby, The Scrivener: A Story of Wall-street,” in Melville’s Short Novels, critical ed., ed. Dan McCall (New York: W. W. Norton, 2002), 10.

    [20] See Giorgio Agamben, “Bartleby, or On Contingency,” in Potentialities: Collected Essays in Philosophy, trans. and ed. Daniel Heller-Roazen (Stanford: Stanford University Press, 1999), 243-71; and see the following by Slavoj Žižek: Iraq: The Borrowed Kettle (New York: Verso, 2004), esp. 71-73, and The Parallax View (New York: Verso, 2006), esp. 381-85.

    [21] Galloway, Laruelle, 220.

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)


    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I delivered it with a sigh of relief, but also with more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough, as I always try to write new talks for each event, but a year that had become complicated, quite frighteningly, in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in five Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism,” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, to change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found that women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps: mansplained, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title of this talk.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining, to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate and, more broadly, about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult, if not dangerous, to speak out, to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts all too easily means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect them. They channel them. They concentrate them, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large is a place where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked, it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure Isaacson admitted he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer, the Analytical Engine, are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and whose work led to the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC computer, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park, where civilian women’s involvement has been erased, as they were forbidden, owing to classified government secrets, from talking about their involvement in the cryptography and computing efforts there.)

    In the end, it’s hard to read Isaacson’s book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores, if not undermines, women. Men explain the history of computing, if you will. As such, it also tells a story that depicts and reflects a culture that doesn’t simply forget women but systematically alienates them. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that has actually fallen over the past few decades, from a high of 37% in 1983. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment, not just the education, of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male; 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. Welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”

    This doesn’t just matter because computer science leads to “good jobs” or because tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
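    To make the point concrete, here is a minimal sketch, in Python, of how such privileges typically get hard-coded. The role names and permission sets are hypothetical, not drawn from any particular LMS, but the shape of the hierarchy is the one I’m describing:

    ```python
    # A minimal, hypothetical sketch of how role "privileges" get hard-coded
    # into an LMS-like tool. The role names and permission sets are
    # illustrative only, not drawn from any particular product.

    ROLE_PRIVILEGES = {
        "administrator": {"create_course", "delete_course", "grade", "post", "read"},
        "teacher": {"grade", "post", "read"},
        "student": {"read"},  # note how little the student role may do
    }

    def has_privilege(role: str, action: str) -> bool:
        """Return True if the given role is allowed to perform the action."""
        return action in ROLE_PRIVILEGES.get(role, set())

    # The hierarchy is now baked into the software's processes:
    assert has_privilege("teacher", "grade")
    assert not has_privilege("student", "post")
    ```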

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs, as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a computer-based instruction system developed at the University of Illinois beginning in the 1960s, offered chat and messaging functionality. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote certain topics up or down. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice is often something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few: the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
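    To see why, consider a toy sketch (illustrative only; no real forum’s ranking is this simple, and the numbers are invented) of what happens when only a handful of users actually vote:

    ```python
    # A toy model (invented data, not any real forum's algorithm) of why
    # up-voting surfaces the preferences of the most active voters rather
    # than "the best" ideas.
    from collections import Counter

    # Imagine a forum where most users never vote; a small, homogeneous
    # group of active users casts nearly all the votes.
    votes = [
        ("topic_a", +1), ("topic_a", +1), ("topic_a", +1),  # the engaged few
        ("topic_b", +1),                                    # a lone other voice
        ("topic_c", -1),                                    # down-voted out of sight
    ]

    scores = Counter()
    for topic, vote in votes:
        scores[topic] += vote

    # The "top" of the forum is simply what the engaged minority preferred.
    print(sorted(scores, key=scores.get, reverse=True))  # ['topic_a', 'topic_b', 'topic_c']
    ```

    The ranking is arithmetically “neutral,” but the output reflects whoever bothered, or felt welcome enough, to vote.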

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically—bitterly ironically, I’d say—many pieces of software today increasingly promise “personalization,” but in reality they present us with a very restricted, restrictive set of choices about who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or if your family name comes first (as names in China are ordered). Your name is presented however the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
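    As a rough illustration, consider what such a template looks like from the engineering side. This is a hypothetical schema, not any real product’s signup form, but the restrictions it encodes are exactly the ones just described:

    ```python
    # A hypothetical signup "template" of the kind described above. The field
    # names and options are invented for illustration, but the restrictions
    # are typical: a binary gender drop-down and a mandatory first/last split.

    SIGNUP_FORM = {
        "first_name": {"type": "text", "required": True},
        "last_name": {"type": "text", "required": True},  # assumes a Western name order
        "gender": {"type": "dropdown", "options": ["male", "female"]},
    }

    def validate(submission):
        """Reject any identity the template didn't anticipate."""
        errors = []
        for field, spec in SIGNUP_FORM.items():
            value = submission.get(field)
            if spec.get("required") and not value:
                errors.append(f"{field} is required")
            if spec["type"] == "dropdown" and value not in spec["options"]:
                errors.append(f"{field} must be one of {spec['options']}")
        return errors

    # A single family name, or a gender outside the menu, simply doesn't fit:
    print(validate({"first_name": "Chimamanda", "gender": "nonbinary"}))
    ```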

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes it less so. What can we do to resist all this hard-coding? What can we do to subvert that hard-coding? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube—millions have watched the demonstrations from his physics lectures—has been accused of sexually harassing women in his edX MOOC.

    The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

    And see, that starts to hint at what I think the answer is to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize that there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been shaped by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach that combines both the social and the technical—outside the realm of ed-tech, I recognize—is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to one another’s block lists. And Good Game Autoblocker is a tool that blocks the “ringleaders” of GamerGate.
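    The core idea is simple enough to sketch. What follows is a minimal, hypothetical illustration, not any of these tools’ actual code: the list names and account handles are invented, and the Twitter API calls a real tool would make are deliberately left out. The point is only the mechanism: each user subscribes to collaboratively maintained lists, and their effective block list is the union.

    ```python
    # A minimal sketch of the collaborative block-list idea behind tools like
    # The Block Bot and Block Together. Everything here is hypothetical: the
    # list names and account handles are invented, and the actual Twitter API
    # calls a real tool would make are deliberately omitted.

    SHARED_BLOCKLISTS = {
        "feminist_devs": {"serial_harasser_1", "serial_harasser_2"},
        "autoblocker": {"serial_harasser_2", "ringleader_3"},
    }

    def subscribed_blocks(subscriptions):
        """Merge every shared list the user subscribes to into one block set."""
        blocked = set()
        for name in subscriptions:
            blocked |= SHARED_BLOCKLISTS.get(name, set())
        return blocked

    # Subscribing means the community's collective labor protects each member:
    print(sorted(subscribed_blocks(["feminist_devs", "autoblocker"])))
    ```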

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would have it, rediscovering innovators who have been erased; it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they work to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never, ever been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure, of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can’t really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.

    Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, to the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is, after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
