
  • Gretchen Soderlund — Futures of Journalism’s Past (or, Pasts of Journalism’s Future)

    by Gretchen Soderlund

    Journalists might be chroniclers of the present, but two decades of books, conferences, symposia, interviews, talks, special issues, and end-of-year features on the future of news suggests they are also preoccupied with what lies ahead. Still, few of today’s media workers are as prescient as William T. Stead, the English journalist and amateur occultist who came close to predicting the 1912 Titanic disaster twenty years before he died in it. In his 1893 short story, “From the Old World to the New,” a transatlantic ocean liner collides with an iceberg and erupts in flames, leaving the vessel’s desperate passengers clinging to a sheet of ice. Unlike the Titanic, everyone in the story lives. Two passengers on a nearby ship receive telepathic distress signals. One has haunting visions of the accident in her sleep, and the other finds a written plea for help in the handwriting of a friend travelling aboard the sinking ship. The clairvoyants relay this information to their captain, who steers a perilous course through the icebergs and rescues the shipwrecked passengers. In 1893 wireless telegraphy, the early term for radio, did not yet exist (even if, as an idea, it electrified the Victorian imagination). By the time of the Titanic’s maiden voyage, radio was a standard maritime communication device. The technology helped, but was no panacea: the closest ship to receive the Titanic’s SOS signals arrived too late for Stead and many of his fellow passengers.

    Stead was at the forefront of thinking about new technologies as well as his own demise. He also had a keen interest in journalism’s future, one shared by many of today’s news workers. Even people who failed to predict the collision of twentieth-century news models with the Web are now regularly called upon to forecast the profession’s future. Answering the future-of-news question requires experts to project past experience and current knowledge onto a forthcoming period of time. But does this question have a history of its own? Did earlier news workers prognosticate as often and with the same urgency? What anxieties or opportunities provoked past future thought? To answer these questions, I explore some future-oriented predictions, assessments, and directives of nineteenth and twentieth-century reporters, editors, and media entrepreneurs in the United States and England. Their claims about the future of journalism serve as windows into the relationship between technology and news work at different historical moments and offer insights into today’s prognoses.

    The Current Crisis

    In the U.S., mainstream news agencies have been dealt a series of technological, economic, and political blows that have changed the way news is written, distributed, consumed, funded, and understood. Anxiety about the future can be understood in light of three interrelated challenges to the post-World War II information order: twenty years of digital technological disruption, the 2008 economic crisis, and politically and economically motivated challenges to the industrial news media.

    By now it is a truism that screen-based digital technologies have transformed journalism. Newspapers, in particular, have experienced an advertising and readership decline more existentially threatening than the threats posed by radio in the 1920s or television in the 1950s. The net presented a challenge to print media even before it became a major platform for news; in the mid-1990s, Craigslist disrupted the long-standing classified ad revenue streams of daily newspapers (Seamans and Zhu 2013). The incorporation of print news functions into the digital has only intensified since then. Internet saturation in U.S. households is at 84 percent and climbing (Pew Research Center 2015). News consumers are no longer tethered to a small set of news organizations; 62 percent read disparate stories they happen across on social media and Twitter feeds and do not subscribe to a single newspaper or news magazine (Gottfried and Shearer 2016).

    Newspapers were already on shaky ground when the 2008 financial crisis struck. Economic downturn coupled with technological displacement led to a crisis of near-Darwinian proportions for an industry that had seen outsized profit margins for much of the twentieth century. Closures, bankruptcies, and mergers ensued. Historic papers like the Rocky Mountain News and Ann Arbor News shut their doors, and many other dailies and weeklies converted to web-only formats (Rogers 2009). Over a hundred papers ceased publication between 2004 and 2016 (Barthel 2016). Papers that endured the techno-economic struggles of the 2000s had to rethink the nature of the news enterprise from the ground up, devising survival strategies in a new, Mad Max-style media terrain depleted of advertising and subscribers.

    Journalism never regained its footing after the financial crisis. As a Pew Research Center study suggests, “2015 might as well have been a recession year” for the traditional news media (Barthel 2016). The study paints a grim picture of the news industry. In 2014 and 2015, the number of print media consumers continued to drop. Even revenue from digital ads fell as advertisers migrated to social media sites like Facebook. And full-time jobs in journalism continued their steady decline: today there are 39 percent fewer positions than there were two decades ago. News consumption also began to shift from personal computers to mobile devices. Readers increasingly access news items on their phones, while standing in line, waiting at red lights, and at other spare moments of the day. In a metric-driven world, mobile news consumption has a silver lining: many sites are receiving more visits than before. However, the average mobile-device reader spends less time with each article than they did on PCs (Barthel 2016). Demand for news exists, albeit in ever-smaller and dislocated chunks.

    At the same time, insurgent news entrepreneurs have altered the media field by leveraging weaknesses in the system and taking advantage of emerging technological possibilities. Just as the most successful nineteenth-century “startups” were enabled by new technologies like the steam press that sped up and lowered the cost of printing,[1] today’s media insurgents – people like Matt Drudge, Steve Bannon, the late Andrew Breitbart, and others – moved straight to digital news and data formats without prior institutional baggage. Since initial start-up costs on the Web are low and news production and dissemination are relatively easy, they were able to offer a trimmed-down model of news production that did not require reporting in the strict sense.

    Some of these insurgents imagine a future for news unfettered by past or existing structures. They claim they want to take a sledgehammer to old media, but it really serves as their foil. In the current context, the terms old media, establishment media, and mainstream media are thrown around by new media players jockeying for position in a changing media field. The White House is currently engaged in a hostile yet mutually beneficial battle with mainstream news outlets, one in which it echoes the position that the news media is a liberal monolith that censors alternative positions.[2] At the same time, establishment journalism is enjoying a period of unpredicted growth due to the Trump bubble, and has been reinventing and reimagining itself as the Fourth Estate in the wake of the 2016 election.

    Future-of-news experts reduce professional and public uncertainty in times of flux (Lowrey and Shan 2016). But it is important to note that not all contemporary observers are worried. The late David Carr, for instance, believed Web startups like BuzzFeed would eventually become more like traditional news outlets. “The first thing they do when they get a little money is hire some journalists,” he said in 2014. He was confident news audiences had an intrinsic desire for quality and that the business end of things would eventually sort itself out.

    Similarly, people who express anxieties about the state of journalism are more likely to have experienced journalism as a stable and predictable field, and to have lost something when the old model collapsed. Those who are concerned worry that a digital-age business model will never arise to solve journalism’s funding problem. They worry that automation will replace journalists. They fear ideological bubbles and distracted audiences. They lament eroding legitimacy and credibility in an era of so-called fake news. And they hope prognosticators possess special knowledge or have more crystalline vision than others in the profession. But did past reporters and editors worry about the fate of their profession in the same way?

    The Nineteenth Century

    In the nineteenth century, journalism was a wide-open, experimental field on both sides of the Atlantic. Literacy rates were climbing. Print technologies had improved. Paper was cheaper to produce than ever before. Newspapers, book publishers, and the public were experiencing the power of mass dissemination. By the second half of the nineteenth century, newspapers’ social standing had improved. Some observers believed they were institutions on the ascent that would eventually play a social role on par with educators, clergy, or government officials.

    However, concerns about the accelerated pace of newspaper work, the constant demand for “newness,” and the unremitting imperative to scoop rival papers were refrains in nineteenth-century journalistic commentary. In his biography of Henry Raymond, the journalist and author Augustus Maverick characterized news work in 1840s New York as an unceasing “treadmill”:

    Only those who have been placed upon the treadmill of a daily newspaper in New York know the severity of the strain it imposes on the mental and physical powers. ‘There is no cessation,’ one newsman explained. ‘A good newspaper never publishes that which is technically denominated ‘old news,’ – a phrase so significant in journalism as to be invested with untold horrors. All must be daily fresh, daily complete, daily polished and perfect; else the journal falls into disrepute, is distanced by its rivals, and, becoming ‘dull,’ dies. (1870, 220)

    I will return to the issue of acceleration later in the paper. For now, it is important to note that perceptions of speedup and fears of being outmoded were embedded in the experience of journalism as early as the 1840s.

    Despite journalism’s daily stresses, Maverick felt the quality and legitimacy of papers were on the rise. The press had successfully overcome early nineteenth-century threats to credibility like partisanship and the sensationalism of the penny press, which printed fantastical, fabricated stories like the New York Sun’s Great Moon Hoax. Maverick believed this progress would continue unabated:

    Accepting the promise of the Present, the prospect of the Future brightens. For, as men come to know each other better, through the rapid annihilation of time and space, they will be plunged deeper into affairs of trade and finance and commerce, and be burdened with a thousand cares, – and the Press, as the reflector of the popular mind, will then take a broader view, and reach forth towards a higher aim; becoming, even more than now, the living photograph of the time, the sympathetic adviser, the conservator, regulator, and guide of American society. (1870, 358)

    Maverick envisioned a future in which the press would both facilitate and temper the social changes wrought by connectivity (changes that he analyzed in his 1858 book on the telegraph).

    The same year Maverick predicted a role for the press as guide and advisor in an increasingly complex and interconnected world, William T. Stead began his career as a fledgling reporter. Few journalists tested, challenged, and wielded the power of the press quite like Stead. In his essay “The Future of Journalism” (1887), he envisioned radical and expansive new plans for the press. His own journalistic experiments had convinced him that editors “could become the most permanently influential Englishmen in the Empire.” But to ascend to this level one had to become a “master of the facts – especially the most dominant fact of all, the state of public opinion.” Editors guessed at public opinion, but had no way of gauging it. To remedy this, Stead suggested journalists be allowed twenty-four hour access to everyone “from the Queen downward.” His news workers of the future would be intimately connected to public opinion across the social system. They would have unfettered access to powerful people, which would diminish the unquestioned authority and privacy of the aristocracy.

    Since the system Stead imagined would be impossible for one person to manage, it would be held in place by travelers who would preach the importance of journalistic work with a missionary zeal. The travelers would eventually be “entrusted the further and more delicate duty of collecting the opinions of those who form the public opinion of their locality.” Stead was certain the enactment of his plan would result in the greatest “spiritual and educational and governing agency which England has yet seen.”

    “The Future of Journalism” demonstrates a keen awareness of print’s power in an era of mass distribution and rapid news diffusion. It was grandiose because it imagined a far greater political role for journalists than they would ever possess. In some respects, though, Stead was a superior prognosticator. In 1887, the communications field was undifferentiated. His journalistic travelers and major-generals would ultimately manifest themselves in the twentieth century as pollsters, social scientists, and public relations specialists. But the editor would not sit at the helm, overseeing these efforts. Instead, journalist/editors would report their findings and beliefs, and serve as conduits in the flow of ideas between these professionals and the public. Despite their inadequacies, Stead’s writings on the future were more prescriptive and imaginative than many of today’s commentaries on the topic.

    Twentieth-Century Futures

    Nineteenth-century commentators on the news profession lamented acceleration, railed against partisanship, and decried certain forms of sensationalism, but they also believed in progress. This changed in the twentieth century. Frank Munsey began his career selling low-cost magazines and pulp fiction. In 1889 he launched the popular general-interest magazine Munsey’s Magazine, and he went on to amass a fortune between 1900 and 1920 by purchasing and selling ten different newspapers, including The New York Daily News, The Boston Journal, and The Washington Times. He was a businessman first and journalist second. Munsey’s contemporaries viewed him as journalism’s undertaker: his very appearance on the scene heralded a newspaper’s demise. His contemporary, Oswald Garrison Villard, described him as “a dealer in dailies – little else and little more” (1923, 81).[3]

    Munsey’s “Journalism of the Future” appeared in 1903 in Munsey’s Magazine. In it, he suggests that the editors’ common refrain about a “lack of good men” misses the real problem. The threat facing journalism is not a lack of well-trained workers, but the size of daily papers. Newspapers, which had been expanding since the 1890s, contained more sections, lengthier features, and larger Sunday editions than ever before. As papers grew, readers became rushed. The problem with news circa 1903 was that there was too much to write about and too much to read. Because they had to absorb so much, readers’ attention was at an all-time low (a concern that resonates with today’s news producers). For Munsey, the solution to the problem of the rushed and inattentive reader lay in condensation and conglomeration. Predicting extreme media consolidation long before it occurred, Munsey speculated that within four years (i.e., by 1907) the entire media field would be whittled down to three or four firms that would publish every newspaper, periodical, magazine, and book:

    The journalism of the future will be of a higher order than the journalism of the past or the present. Existing conditions of competition and waste, under individual ownership, make the ideal newspaper impossible. But with a central ownership big enough and strong enough to encompass the whole country, our newspapers can afford to be independent, fearless, and honest. (1903, 830)

    For Munsey, consolidation, quality, and independence are linked through the efficiency and scope of large-scale production and the nationalization of mass audiences. He does not foresee problems caused by monopolization or threats to newspapers from radio. He imagines technology only as it relates to its effects on the productive capacity of print news, which he thought was fettered by local ownership.

    Writing during World War I, Willard Grosvenor Bleyer, founder of the University of Wisconsin journalism school and advocate of professional training, took a more modest view of journalism’s future. His primary concern was wartime press censorship and the spread of propaganda through semi-official news agencies. However, he considered these developments temporary deviations from the normal function of the press in a democratic society: eventually the profession would return to its pre-war normalcy. “The world war,” he wrote, “has given rise to peculiar problems, none of which, however, seems likely to have permanent effects on our newspapers” (1918, 14). Wartime austerity, especially the high price of paper, posed problems for the news industry. But there was a bright side. People wanted news from Europe, so the higher cost of newspapers had not decreased circulation rates.

    Some early-twentieth century observers were concerned about sensationalism and editorial independence or the effects of war on the press, while others worried about the future of democracy in the context of Munsey-wrought newspaper industry mergers. Oswald Villard, writer for The Nation and The NY Evening Post, founder of the American Anti-Imperialist League, and the first treasurer of the National Association for the Advancement of Colored People, argued that consolidation threatened democracy. Most newspapers lacked commercial independence and were beholden to advertisers who limited what they could publish. He was also concerned about the political implications of audience fragmentation: “Not today can one, no matter how trenchant their pen, be in a garret and expect to reach the conscience of a public by seventy millions larger than the America of Garrison and Lincoln.” Villard, however, held out hope that the views of ‘great men’ would find an audience, even if it meant bypassing the press. He did not predict new media forms, but looked back at old ones: “the prophet of the future will make his message heard, if not by a daily, then by a weekly; if not by a weekly, then by pamphleteering in the manner of Alexander Hamilton; if not by pamphleteering then by speech in the market-place” (1923, 315).

    After World War II, journalism experienced a period of stability that gave it an aura of permanence, as if media institutions were constants amidst other economic, social, and cultural changes. Future concerns during this period centered on issues of technology and media consolidation. In 1947, for example, the Hutchins Commission on Freedom of the Press predicted that newspapers would soon be sent from FM radio stations to personal facsimile machines. These devices would print, fold, and deposit them in the hands of U.S. householders each morning (34-45). News workers and industry analysts predicted that technologies as diverse as citizens band radio, cable TV, camcorders, and CD-ROMs would, for better or worse, alter the production or consumption of news and either enhance or impede democratic processes (Curran 2010). In the 1980s and 90s, journalists and media critics pointed to the pernicious effects of monopolization in national and regional markets. They feared the one-newspaper town and the absorption of local newspapers by media franchises. Michael Kinsley recalls that, in the pre-Internet period, “at symposia and seminars on the Future of Newspapers, professional worriers used to worry that these monopoly or near-monopoly newspapers were too powerful for society’s good” (2014).

    Time, Space, and Journalism

    Time is not a natural resource that springs from the Earth, but a cultural and social construct imagined and experienced in multiple ways (Fabian 1983).[4] Some social theorists argue that the sensation of rapid acceleration is a key feature of the modern experience of time (Crary 2013; Rosa 2013). Hartmut Rosa, for example, has argued that time compression has reached a point where the hamster wheel or treadmill has become an apt metaphor for modern life. Work speedups and technological immersion are necessary just to maintain social stasis, without the possibility of advancement or breaking free (Rosa 2010). For Rosa and other theorists of acceleration, acceleration leaves you mired in the present, anticipating the future with a sense of dread. The reality is that there is no uniform experience of time; our experience depends upon our position within circuits of information and capital (Sharma 2014). But when it comes to technological and economic speedup, journalism may be the canary in the mine. Reporters like Maverick experienced this treadmill effect as early as the 1840s. In 1918, Francis Leupp described the quickening pace of news work in the electric age:

    We must reckon with the progressive acceleration of the pace of our 20th century life generally. Where we walked in the old times we run in these; where we ambled then, we gallop now. In the age of electric power, high explosives, articulated steel frames, in the larger world; of the long-distance telephone, the taxicab, and the card-index, in the narrower. The problem of existence is reduced to terms of time-measurement. (39)

    Like Maverick, Leupp experienced the dynamism of modern life and the dual pressures of accuracy and speed in journalism.

    It makes sense that journalism would experience the present this way. As the quintessential modern form, news embodies planned obsolescence (Schwartz 1999). Journalism has undergone two centuries of shrinking intervals of newness and relevance: six-months, a week, a day, an hour. With the rise of social media and Twitter, the intervals between news cycles have grown even shorter. In the twentieth century, edition release times and broadcast schedules helped carve the day into identifiable units with firm deadlines. But in a context where news can be posted around the clock and updated every minute, the clock is no longer a structuring device for journalism. Minutes, seconds, and the calendar click-over from one day to the next are the only salient units of time. News stories that were relevant and new last week often seem ancient a week later. A newsworthy event like President Trump pulling out of the Paris climate agreement can feel as distant as the Vietnam War the following week. New communication forms like Twitter coupled with strategies of disinformation and the routinization of scandal shatter perceptions of continuity. What we are experiencing now is not the death of history, as was proclaimed after the fall of the Berlin Wall, but the death of the present. In news, rapid acceleration has amnestic effects, similar to the experience of sleep deprivation.

    If the main time/space vectors in journalism used to be deadlines and beats, the latter may also be losing their importance, giving way to a more fluid cut-and-run style of journalism. For example, the Washington Post’s Chris Cillizza suggests that young reporters should not decline stories by saying, “that’s not my beat” (2016). Rather, in a context of dwindling opportunities, journalists should pursue any story available, whether or not it fits into the old-fashioned logic of beat work or the range of competence of individual journalists.[5] But while traditional beats may be losing their cogency, reporters must add a new online “beat” to their repertoire that entails close surveillance of social media and online news, a dynamic that some critics have argued creates a house of mirrors effect in the news industry (Reinemann and Baugut 2014).

    Technology and Uncertainty in the Professions

    Journalism may be the paradigmatic case of a profession imperiled by a new technology, but its concerns about time and technological displacement cannot be generalized to other spheres. Take lawyers, social workers, and physicians. Uncertainty within the legal profession is largely unrelated to the digital. It was caused by the recent financial crisis coupled with the overtraining of new professionals. Jobs for newly minted JDs evaporated during the recession, leading to a decline in the number of law school applicants after 2010. With enrollment down, the future of smaller law schools became uncertain, and many schools lowered admission standards to stay afloat (Olson 2015; Pistone and Horn 2016). The profession has been in crisis, but not because of the Internet, and there is even some evidence that law positions are coming back (Solomon 2015). Uncertainty for social workers began even earlier, when the Clinton administration began dismantling the welfare state. Despite the obvious need for such professionals, government, non-profit, and other social service jobs have seen a quarter-century decline because of deep budgetary cuts that began in the 1990s (Reisch 2013).

    Physicians seem least concerned with the future. They worry more about burnout than they do about the fate of their profession. The future is typically invoked in discussions about labor shortages and descriptions of new developments at the intersection of medicine and technology. Articles on the future of medicine routinely tout new developments like 3D printers that can form living cells into new organs (Mellgard 2015). Digitalization has changed many aspects of medicine: electronic medical records and charting alter the way nurses and physicians access information, for instance. But it has not led to credible speculation about replacing physicians with bots. Contrast this with some news workers’ worries about replacement by computer programs like Automated Insights’ narrative generation system, Wordsmith. The Associated Press now employs Wordsmith to do its quarterly earnings reports and other stories, and has become so confident in these auto-generated stories that it runs many of them without prior vetting (the rare human-edited AI story is said to have had “the human touch”) (Miller 2015). Nor have drones been proposed as a viable alternative to human physicians, as they have been for newsgatherer/photojournalists (Etzler 2016).[6]

    In none of these other cases is technology the primary motor of destabilization. The character of future angst in the professions, therefore, is occupation-dependent. And journalism, it seems, is uniquely sensitive and vulnerable to technology. Every widely adopted communications technology – the steam press, radio, the net – has restructured news and led to audience expansion or contraction. In this sense, there is nothing new about journalists’ dependence on and transformation by technologies. The one constant is that journalists work in a field of technological contingency.

    Conclusion: Euphoria and Dysphoria in Journalism

    Visions of the future are also statements about the present. Political and economic conditions, labor concerns, and beliefs about the nature of time are contained within predictive thought. The future-of-journalism question has been asked both when a number of possibilities are on the table and when fewer options are imaginable. Sometimes predictions are made when a journalist has a stake in seeing a particular vision enacted. There was no social stasis or treadmill for Munsey, who saw conglomeration as the key to good journalism, or for Stead, who imagined himself as the heroic journalist proselytizer. Both saw themselves as leaders of the free world. Feelings of euphoria and dysphoria, therefore, come and go and are not unique to one era. Nineteenth-century journalists like Stead and Maverick imagined their field’s future and the journalist’s future roles in society. Both were “feeling it,” riding high on the wave of mechanization.

    William T. Stead, 1909 (image source: https://it.wikipedia.org/wiki/William_Thomas_Stead, https://giphy.com/gifs/3XH3YqPpfwmPMxx5Xr)

    Social roles are also embedded within occupational visions of the future. Will tomorrow’s journalists be tellers of truth, interpreters of data, shapers of public opinion, informers of policy makers, imaginers of social utopias? Some commentators insist that news must change to remain relevant in the digital age. In a world of abundant facts, reporters should be master interpreters, explaining the “what” and “how” to the public rather than reciting basic information (Cillizza 2016; Stephens 2014). As older models of journalism become outmoded, either by the Web or by computer programs, the hope is that professional journalists will find a niche explaining events. A similar impulse lies behind data-driven journalism, but in this case the journalists refashion themselves as computer workers, scraping the Web for reams of data, interpreting it, and presenting it to audiences in visually and narratively compelling ways. In solutions-based journalism, the reporter is a meta-social worker or public policy specialist, proposing potential solutions to local social problems based on what other locales have found successful.

    There is also an emerging patronage system in which billionaires, foundations, and small donations prop up capital-intensive journalistic forms like investigative journalism. This is a good stopgap measure, and much of the work supported by tech magnates like Jeff Bezos, Pierre Omidyar, and others has been of high quality. But it raises the question: can journalists write exposés today about the very people and tech companies sponsoring their journalism, the way Ida Tarbell wrote about Standard Oil?

    The social roles future-of-news experts imagine might come to pass, but not always in the way they expect. Stead’s call for government by journalism, for instance, is certainly embodied in a figure like Breitbart’s Steve Bannon. Although Stead would have disagreed with his political vision and journalistic practices, Bannon is also “feeling it,” envisioning a future of infinite possibilities.

    Occupational forecasting serves both psychological and pragmatic ends: it reduces anxieties at the same time that it identifies trends to guide present-day action. Because the future is speculative and can only be imagined or modeled, not recreated from memory, artifact, or written record, prediction-based advice runs a high risk of misdirection. We can safely assume that prognosticators will not determine the actual future of journalism. If Stead were really clairvoyant, the Titanic would have been spared and journalism saved. As Robert Heilbroner suggests, prediction is an exercise in futility. It is better to “ask whether it is imaginable… to exercise effective control over the future-shaping forces of Today” (1995, 95). It is only in this sense that discussions of the future and the social experiments they generate do, in fact, transform the field.

    _____

    Gretchen Soderlund is Associate Professor of Media History in the University of Oregon’s School of Journalism and Communication. She is the author of Sex Trafficking, Scandal, and the Transformation of Journalism, 1885-1917 (University of Chicago Press) and editor of Charting, Tracking, and Mapping: New Technologies, Labor, and Surveillance, a special issue of Social Semiotics. Her articles have appeared in such journals as American Quarterly, Feminist Formations, The Communication Review, Humanity, and Critical Studies in Media Communication.

    _____

    Acknowledgments

    The author would like to thank Patrick Jones for his comments on an earlier draft of this essay.

    _____

    Notes

    [1] The tremendous success of nineteenth-century self-made owner-editors like Benjamin Day or S.S. McClure can be attributed to innovations in content and funding models. In the 1830s, Day lowered the cost of his newspaper to only a penny, making it affordable to more New Yorkers, and made up for the decreased revenue by selling more advertising space. McClure did the same thing for magazines in the 1890s, selling his publication for a nickel instead of the standard quarter while increasing ad revenue. In doing so, both took advantage of untapped opportunities to reshape the news field in their respective eras.

    [2] Before the 2016 election, this rhetoric united the libertarian left and the right. In a 2014 interview on Democracy Now that, not coincidentally, got positive play in the right-wing media, Glenn Greenwald lambasted Washington Post editors as “old-style, old-media, pro-government journalists… the kind who have essentially made journalism in the U.S. neutered and impotent and obsolete” (Watson 2014).

    [3] Villard also said of Munsey: “There is not a drop of the reformer’s blood in him; there is in him nothing that cries out in pain in response to the travails of multitudes” (1923, 72).

    [4] The representational features of future thought are also culturally and historically specific (Rosenberg and Harding 2005).

    [5] This more mobile, targeted approach to news production with fewer fixed duties or beats may offer a more varied work experience. But it has labor implications as well: it edges toward freelancing, and without a beat to invoke it may be harder for reporters to decline assignments. Further, reporters may find themselves in over their heads, reporting on topics in which they can claim no expertise.

    [6] Indeed, the FAA changed its policy on August 29, 2016 so that journalists do not need pilot’s licenses to fly drones, which will precipitate the increased use of the tool in the future (Etzler 2016).

    _____

    Works Cited

    • Barthel, Michael. 2016. “Newspapers: Fact Sheet.” Pew Research Center (Jun 15).
    • Carr, David. 2014. “NYT’s David Carr on the Future of Journalism.” YouTube interview.
    • Cillizza, Chris. 2016. “The Future of Journalism Is Saying ‘Yes.’ A Lot.” Washington Post (May 23).
    • Crary, Jonathan. 2013. 24/7: Late Capitalism and the Ends of Sleep. Brooklyn, NY: Verso.
    • Curran, James. 2010. “Technology Foretold.” In Natalie Fenton, ed., New Media, Old News: Journalism and Democracy in the Digital Age. London: Sage.
    • Etzler, Allen. 2016. “Exploring the Use of Drones in Journalism.” News Media Alliance (Sep).
    • Fabian, Johannes. 1983. Time and the Other: How Anthropology Makes Its Object. New York: Columbia University Press.
    • Friedhoff, Stefanie. 2015. “David Carr on Teaching and the Future of Journalism.” Boston Globe (Feb 13).
    • Gottfried, Jeffrey and Elisa Shearer. 2016. “News Use Across Social Media Platforms 2016.” Pew Research Center (May 26).
    • Heilbroner, Robert. 1995. Visions of the Future: The Distant Past, Yesterday, Today, Tomorrow. Oxford: Oxford University Press.
    • Kinsley, Michael. 2014. “The Front Page 2.0.” Vanity Fair (Apr 10).
    • Lowrey, Wilson and Zhou Shan. 2016. “Journalism’s Fortune Tellers: Constructing the Future of News.” Journalism. 1-17.
    • Maverick, Augustus. 1870. Henry J. Raymond and the New York Press for Thirty Years: Progress of American Journalism from 1840 to 1870. Hartford, CT: A.S. Hale and Company.
    • McCaskill, Nolan. 2017. “Trump Backs Bannon: ‘The Media is the Opposition Party.’” Politico (Jan 27).
    • Mellgard, Peter. 2015. “Medical 3D Printing Will ‘Enable a New Kind of Future.’” The World Post (Jun 22).
    • Miller, Ross. 2015. “AP’s ‘Robot Journalists’ are Writing their own Stories Now.” The Verge (Jan 29).
    • Munsey, Frank. 1903. “Journalism of the Future.” Munsey’s Magazine 28. 823-830.
    • Olson, Elizabeth. 2015. “Study Cites Lower Standards in Law School Admissions.” The New York Times (Oct 26).
    • Patel, Neel V. 2015. “Dronalism is the Future of Journalism: The End of Privacy Cuts Both Ways.” Inverse (Sep).
    • Pew Research Center. 2015. “Americans’ Internet Access: Percent of Adults 2000-2015.”
    • Pistone, Michele and Michael Horn. 2016. Disrupting Law School: How Disruptive Innovation Will Revolutionize the Legal World. Clayton Christensen Institute.
    • Reinemann, Carsten and Philip Baugut. 2014. “German Political Journalism Between Change and Stability.” In Raymond Kuhn and Rasmus Kleis Nielson, eds., Political Journalism in Transition: Western Europe in a Comparative Perspective. New York: Palgrave Macmillan.
    • Reisch, Michael. 2013. “Social Work Education and the Neo-liberal Challenge: The U.S. Response to Increasing Global Inequality.” Social Work Education 32. 715-733.
    • Rescher, Nicholas. 1998. Predicting the Future: An Introduction to the Theory of Forecasting. Albany, NY: State University of New York Press.
    • Rogers, Tony. 2009. “A Timeline of Newspaper Closings and Calamities.” About.com.
    • Rosa, Hartmut. 2010. “Full Speed Burnout? From the Pleasures of the Motorcycle to the Bleakness of the Treadmill: The Dual Face of Social Acceleration.” International Journal of Motorcycle Studies 6:1.
    • Rosa, Hartmut. 2013. Social Acceleration: A New Theory of Modernity. New York: Columbia University Press.
    • Rosenberg, Daniel and Sandra Harding. 2005. “Introduction: Histories of the Future.” In Daniel Rosenberg and Sandra Harding, eds., Histories of the Future. Durham, NC: Duke University Press.
    • Schwartz, Vanessa. 1999. Spectacular Realities: Early Mass Culture in Fin-de-Siècle Paris. Oakland, CA: University of California Press.
    • Seamans, Robert and Feng Zhu. 2013. “Responses to Entry in Multi-Sided Markets: The Impact of Craigslist on Local Newspapers.” Management Science 60. 476-493.
    • Sharma, Sarah. 2014. In the Meantime: Temporality and Cultural Politics. Durham, NC: Duke University Press.
    • Solomon, Steven Davidoff. 2015. “Law Schools and Industry Show Signs of Life Despite Forecasts of Doom.” The New York Times (Mar 31).
    • Stead, William. 1887. “The Future of Journalism.” Contemporary Review 50. 664-679.
    • Stead, William. 1893. “From Old World to New: or, A Christmas Story of the Chicago Exhibition.” Review of Reviews.
    • Stephens, Mitchell. 2014. Beyond News: The Future of Journalism. New York: Columbia University Press.
    • Villard, Oswald Garrison. 1923. Some Newspapers and Newspaper-Men. New York: Alfred A. Knopf.
    • Watson, Steve. 2014. “Greenwald Slams ‘Neutered And Impotent and Obsolete Media.’” Infowars.
  • Quinn DuPont – Ubiquitous Computing, Intermittent Critique

    a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)

    by Quinn DuPont

    ~

    It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the gambit of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.

    There are of course many ways to approach such a large and important topic: from the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal either entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six out of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength, and one of its major weaknesses.

    By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, as a kind of Peircean index, and one such way to think about systems of representation. Another example of theirs, Natalie Jeremijenko’s One Trees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index, since the trees are from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully-grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristic indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.

    MoMA PS1 | James Turrell, Meeting 2016, Photos by Pablo Enriquez

    A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways that film, cinema, and theater are simply not like modern digital technologies. Such an epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating from screen studies—and it has a proximate cause in Lev Manovich’s agenda-setting The Language of New Media (2001), which relished the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.

    Because of this orientation, some of the chapters fail to critically engage with technologies, events, and practices largely affecting lived society. A very good artwork may go a long way to exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities, and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to avoiding what amounts to belated criticism.

    This specificity about social issues might come in the form of engagement with normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do, which is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, unsexy aspects of wearable technologies (accessibility) now offer potential assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented surveillance and monetization opportunities. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity, they are hard to see, and therefore, tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.

    A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things is empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important object of study, but not when they arrive as the pile-on of Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.

    Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.

    Before turning to a sample of the truly excellent chapters in this volume, I must pause to make a comment about the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, using some back-of-the-napkin calculations, the 406 dense pages amount to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And, while such a large book might seem like good value, the pragmatics of putting an estimated 330,000 words into a single volume requires considerable care in typesetting and layout, which unfortunately is not the case here. At about 90 characters per line and 46 lines per page—all set in a single column—the tiny text set on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text there are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.
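
    To spell out that back-of-the-napkin calculation, a minimal sketch follows, assuming an average of roughly five characters per word, spaces included (the review does not state the divisor it used):

    ```latex
    % Rough word-count estimate; the ~5 characters-per-word divisor is an assumption.
    406 \,\text{pages} \times 46 \,\tfrac{\text{lines}}{\text{page}} \times 90 \,\tfrac{\text{chars}}{\text{line}} \approx 1{,}680{,}000 \,\text{chars}
    \qquad
    \frac{1{,}680{,}000 \,\text{chars}}{5 \,\text{chars/word}} \approx 336{,}000 \,\text{words}
    ```

    That lands close to the roughly 330,000 words estimated above.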

    Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to attempt to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem any other weaknesses.

    Elaborating on a theme from her 2011 book Programmed Visions (MIT), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from “Ubiquitarians,” a Lutheran sect who believed Christ was present ‘everywhere at once’ and therefore invisible.) According to Chun’s reading of Weiser, to get to a state of ubiquitous computing, machines must lose their individualized identity or importance. Therefore, unindividuated computers had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential aspect of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment to cognitive models of computation), storage was a well-understood word, which etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). If we wanted to rehabilitate Weiser’s original image of being able to “think and live,” then, we would need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).

    Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Waitangi Treaty, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore, the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content to simply offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer to peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.

    Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release) and the now-maligned Google Glass offer unique value propositions for those engaged in a spectrum of impairment and “complex disability effects” (274). For people who rely on them for day-to-day assistance and wayfinding, these devices are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.

    John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” when the Dow fell nearly a thousand points, shedding a trillion dollars in value, and then rebounded within five minutes, are essentially uncontrollable and unpredictable. This narrative, Johnston points out, has been detailed before, but he twists it and argues that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events, which just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into critical technology that is both ubiquitous and complex.

    _____

    Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.

  • Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames. Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and that the matter of the cultural status of the videogame has hardly been put to rest. Given that Bogost is a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that he begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems something of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

    To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, or that writing about games is precluded from the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion about Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), or the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—belie such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

    In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

    Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms and self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist,” and, indeed, no matter how disinterested, historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there may be more ripe currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

    In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not accord with present conceptions of academic activity (indeed, it might even be unfashionable!), which seem to more generally despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted, would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential ability to allow reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

    Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (interviews here and here), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    Back to the essay
    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

    [3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).

    Back to the essay

    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014.  “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Chris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.

    Back to the essay

  • Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    By Michelle Moravec
    ~

    Author’s note: this is the written portion of a talk given at St. Joseph University’s Art + Feminism Wikipedia editathon, February 27, 2016. Thanks to Rachael Sullivan for the invite and  Rosalba Ugliuzza for Wikipedia data culling!

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth
    — Sarah Josepha Hale, Woman’s Record (1853)

    As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration,” or, in Wikipedia language, who is deemed notable, so often seems to exclude women.

    As Shannon Mattern asked at last year’s Art + Feminism Wikipedia edit-a-thon, “Could Wikipedia embody some alternative to the ‘Great Man Theory’ of how the world works?” Literary scholar Alison Booth, in How To Make It as a Woman, notes that the first book in praise of women by a woman appeared in 1404 (Christine de Pizan’s Book of the City of Ladies), launching a lengthy tradition of “exemplary biographical collections of women.” Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular. Booth also points out, lest we consign the genre to the realm of mere curiosity, that although these volumes predate the invention of “women’s history,” their compilers, editrixes, or authors considered them a contribution to “national history”; indeed, Booth concludes that the volumes were “indispensable aids in the formation of nationhood.”

    Booth compiled a list of the most frequently mentioned women in a subset of these books and tracked their frequency over time.  In an exemplary project, she made this data available on the web, allowing for the creation of the visualization below of American figures on that chart.

    [Chart: Booth data by date]

    This chart makes clear what historians already know: notability is historically specific and contingent, something Wikipedia does not take into account when it formulates guidelines that treat notability as a stable concept.

    Only Pocahontas deviates from the great white woman school of history and she too becomes less salient over time.  Furthermore, by the standards of this era, at least as represented by these books, black women were largely considered un-notable. This perhaps explains why, in 1894, Gertrude Mossell published The Work of the Afro-American Woman, a compilation of achievements that she described as “historical in character.” Mossell’s volume itself is a rich source of information of women worthy of commemoration and commendation.

    Looking further into the twentieth century, the successor to this sort of volume is aptly titled Notable American Women, a three-volume set that, while published in 1971, had its roots in the 1950s, when Arthur Schlesinger, as head of Radcliffe College’s council, suggested that a biographical dictionary of women might be a useful thing. Perhaps predictably, a publisher could not be secured, so Radcliffe funded the project itself. The question then becomes: does inclusion in a volume declaring women as “notable” mean that these women would meet Wikipedia’s “notability” standards?

    Studies have found varying degrees of bias in coverage of female figures compared to male figures. The latest numbers I found, as of January 2015, concluded that women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and that prior to the 20th century, the problem was wildly exacerbated by “sourcing and notability issues.” Using the “missing” biographies concept borrowed from a 2010 study of Wikipedia’s “completeness,” I compared selected “classified” areas for biographies of Notable American Women (analysis was conducted by hand with tremendous assistance from Rosalba Ugliuzza).

    Working with the digitized copy of Notable American Women in Women and Social Movements, I began compiling a “missing” biographies quotient: the percentage of entries missing from Wikipedia for individuals in each category of the “classified list of biographies” that appeared at the end of the third volume of Notable American Women. Mirroring the well-known category issues of Wikipedia, the editors finessed the difficulty of limiting individuals to one area by listing them in multiple areas, including a section called “Negro Women” and another called “Indian Women”:

    [Chart: “missing” biographies by classification]
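
    A minimal sketch of how such a quotient might be computed, assuming a hand-compiled tally per classification (the counts below are placeholders, not the essay’s data):

```python
# Hypothetical per-classification tallies: (total entries, entries with no
# English Wikipedia article). These counts are placeholders for illustration.
classified_entries = {
    "Social workers": (9, 5),
    "Nurses": (20, 6),
    "Artists": (30, 0),
}

def missing_quotient(total, missing):
    """Percentage of a classification's entries lacking a Wikipedia article."""
    return 100 * missing / total

for name, (total, missing) in sorted(classified_entries.items()):
    print(f"{name}: {missing_quotient(total, missing):.0f}% missing ({missing} of {total})")
```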

    Initially I had suspected that larger classifications might have a greater percentage of missing entries, but that is not true. Social workers, the classification with the highest percentage of missing entries, is a relatively small classification with only nine individuals. The six classifications with no missing entries ranged in size from five to eleven.  I then created my own meta-categories to summarize what larger classifications might exacerbate this “missing” biographies problem.

    [Chart legend: meta-categories of “missing” biographies]

    Inclusion in Notable American Women does not translate into inclusion in Wikipedia.   Influential individuals associated with female-dominated professions, social work and nursing, are less likely to be considered notable, as are those “leaders” in settlement houses or welfare work or “reformers” like peace advocates.   Perhaps due to edit-a-thons or Wikipedians-in-residence, female artists and female scientists have fared quite well.  Both Indian Women and Negro Women have the same percentage of missing women.

    Looking at the network of “Negro Women” by their Notable American Women classified entries, I noted their centrality. Frances Harper and Ida B. Wells are the most networked women in the volumes, which is representative of their position as bridge leaders (I also noted the centrality of Frances Gage, who does not have a Wikipedia entry yet, a fate she shares with the white abolitionists Sallie Holley and Caroline Putnam).

    [Network graph: “Negro Women” and their Notable American Women classifications]

    Visualizing further, I located two women who don’t have Wikipedia entries and are not included in Notable American Women:

    [Network graph: women missing from Wikipedia and Notable American Women]

    Eva del Vakia Bowles was a longtime YWCA worker who spent her life trying to improve interracial relations. She was the first black woman hired by the YWCA to head a branch. During WWI, Bowles had charge of Ys established near war-work factories to provide R & R for workers. Throughout her tenure at the Y, Bowles pressed the organization to promote black women to positions within the organization. In 1932 she resigned from her beloved Y in protest over policies she believed excluded black women from the decision-making processes of the National Board.

    Addie D. Waites Hunton, also a Y worker and a founding member of the NAACP, was an amazing woman who, along with her friend Kathryn Magnolia Johnson, authored Two Colored Women with the American Expeditionary Forces (1920), which details their time as Y workers in WWI, where they were among the very first black women sent. Later, she became a field worker for the NAACP and a member of the WILPF, and was an observer in Haiti in 1926 as part of that group.

    Finally, using a methodology I developed when working on the racially biased History of Woman Suffrage, I scraped names from Mossell’s The Work of the Afro-American Woman to find women who should have appeared in Notable American Women and in Wikipedia. Although this is a rough result of name extraction, it gave me a place to start.
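
    The essay does not spell out the extraction step, but a rough sketch of this kind of name scraping, assuming a plain-text copy of Mossell’s volume saved under the hypothetical filename mossell.txt, might look like this:

```python
# A rough, illustrative name-extraction pass over a digitized text; the essay's
# actual method is not described here, so this is only a sketch. "mossell.txt"
# is a hypothetical plain-text copy of The Work of the Afro-American Woman.
import re

with open("mossell.txt", encoding="utf-8") as f:
    text = f.read()

# Crude heuristic: runs of two to four capitalized words (e.g., "Frances Harper",
# "Alice Dugged Cary"). It over-matches, but produces a list to check by hand.
candidate_names = set(re.findall(r"(?:[A-Z][a-z]+\.?\s){1,3}[A-Z][a-z]+", text))

# Names already accounted for (compiled separately from Notable American Women
# and Wikipedia) would then be subtracted out.
known_names = set()  # placeholder
to_investigate = sorted(candidate_names - known_names)
print(len(to_investigate), "candidate names to check by hand")
```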

    [Diagram: overlap of names from Mossell with Notable American Women and Wikipedia]

    Alice Dugged Cary does not appear in Notable American Women or Wikipedia. Born free in 1859, she became president of the State Federation of Colored Women of Georgia, served as librarian of the first library branch for African Americans in Atlanta, established the first free kindergartens for African American children in Georgia, and was nominated as an honorary member of Zeta Phi Beta and involved in its spread.

    Similarly, Lucy Ella Moten, born free in 1851, became principal of Miner Normal School, earned an M.D., and taught in the South during summer “vacations.” She appears in neither Notable American Women nor Wikipedia (or at least she didn’t until Mike Lyons started her page yesterday at the editathon!).

    _____

    Michelle Moravec (@ProfessMoravec) is Associate Professor of History at Rosemont College. She is a prominent digital historian and the digital history editor for Women and Social Movements. Her current project, The Politics of Women’s Culture, uses a combination of digital and traditional approaches to produce an intellectual history of the concept of women’s culture. She writes a monthly column for the Mid-Atlantic Regional Center for the Humanities, and maintains her own blog History in the City, at which an earlier version of this post first appeared.

    Back to the essay

  • Coding Bootcamps and the New For-Profit Higher Ed

    Coding Bootcamps and the New For-Profit Higher Ed

    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

    Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans students can’t afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education that new either.


    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

    The curriculum of these commercial colleges was largely based around the demands of local employers alongside an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5800 in colleges and universities – something that highlights what’s a familiar refrain still today: that traditional higher ed institutions do not meet everyone’s needs.


    The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), Devry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.


    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ (almost) most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.

    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with its tuition price tag, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates’ annual loan payments exceed 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
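
    The arithmetic behind that test is simple enough to sketch. Below is a minimal, hedged illustration of a debt-to-earnings check along those lines – a graduate’s debt burden passes if it stays within either threshold; the poverty-line figure and the example graduate are illustrative assumptions, not official Department of Education parameters.

        # Minimal sketch of a gainful-employment-style debt-to-earnings check.
        # The poverty-line figure and the example graduate are illustrative
        # assumptions, not official Department of Education parameters.
        POVERTY_LINE = 11_770  # assumed guideline for a single person, for illustration

        def passes_gainful_employment(annual_loan_payment, annual_earnings):
            """Pass if payments stay within 8% of total earnings
            or within 20% of discretionary earnings (earnings above 150% of poverty)."""
            discretionary = max(annual_earnings - 1.5 * POVERTY_LINE, 0)
            return (annual_loan_payment <= 0.08 * annual_earnings
                    or (discretionary > 0
                        and annual_loan_payment <= 0.20 * discretionary))

        # Hypothetical graduate: $3,000/year in loan payments on $25,000 in earnings
        print(passes_gainful_employment(3_000, 25_000))  # False: 12% of wages, ~41% of discretionary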

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceptive, as the price tag and the length of bootcamps vary greatly. Moreover, some programs, such as App Academy, charge no upfront tuition (beyond a $5000 deposit) but then require that graduates pay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
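
    To make that concrete, here is a back-of-the-envelope comparison of the two pricing models, using the survey’s reported average tuition and a hypothetical first-year salary; the exact repayment terms of any given school may differ.

        # Back-of-the-envelope comparison of upfront tuition vs. an income-share model.
        # The salary below is a hypothetical figure, not data from any survey.
        average_upfront_tuition = 11_852   # Course Report's reported average tuition
        income_share_rate = 0.20           # "up to 20% of first year's salary"
        hypothetical_salary = 60_000

        deferred_cost = income_share_rate * hypothetical_salary
        print(f"Upfront model:  ${average_upfront_tuition:,}")
        print(f"Deferred model: up to ${deferred_cost:,.0f} on a ${hypothetical_salary:,} salary")
        # At this salary, the "free" deferred model can cost more than the average upfront tuition.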

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)

    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    Age: mean 30.95
    Gender: female 36.3%, male 63.1%
    Ethnicity: American Indian 1.0%, Asian American 14.0%, Black 5.0%, Other 17.2%, White 62.8%
    Hispanic origin: yes 20.3%, no 79.7%
    US citizenship: born in the US 78.2%, naturalized 9.7%, not a citizen 12.2%
    Education: high school dropout 0.2%, high school graduate 2.6%, some college 14.2%, Associate’s degree 4.1%, Bachelor’s degree 62.1%, Master’s degree 14.2%, professional degree 1.5%, doctorate 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male and from more affluent neighborhoods, and MOOC students also tend to already possess Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While 1 in 20 of all students is enrolled in a for-profit college, 1 in 10 African American students, 1 in 14 Latino students, and 1 in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students in for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.

    According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    In making financial aid available for bootcamps and MOOCs, one does have to wonder if the Obama Administration is not simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.

    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.

    This is what we know from history: as the funding for public higher ed decreased, for two- and four-year schools alike, for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and the only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • The Ground Beneath the Screens

    The Ground Beneath the Screens

    a review of Jussi Parikka, A Geology of Media (University of Minnesota Press, 2015) and The Anthrobscene (University of Minnesota Press, 2015)
    by Zachary Loeb

    ~

    Despite the aura of ethereality that clings to the Internet, today’s technologies have not shed their material aspects. Digging into the materiality of such devices does much to trouble the adoring declarations of “The Internet Is the Answer.” What is unearthed by digging is the ecological and human destruction involved in the creation of the devices on which the Internet depends—a destruction that Jussi Parikka considers an obscenity at the core of contemporary media.

    Parikka’s tale begins deep below the Earth’s surface in deposits of a host of different minerals that are integral to the variety of devices without which you could not be reading these words on a screen. This story encompasses the labor conditions in which these minerals are extracted and eventually turned into finished devices; it tells of satellites, undersea cables, and massive server farms; and it includes a dark premonition of the return to the Earth which will occur following the death (possibly a premature death due to planned obsolescence) of the screen at which you are currently looking.

    In a connected duo of new books, The Anthrobscene (referenced below as A) and A Geology of Media (referenced below as GM), media scholar Parikka wrestles with the materiality of the digital. Parikka examines the pathways by which planetary elements become technology, while considering the transformations entailed in the anthropocene, and artistic attempts to render all of this understandable. Drawing upon thinkers ranging from Lewis Mumford to Donna Haraway and from the Situationists to Siegfried Zielinski, Parikka constructs a way of approaching media that emphasizes that it is born of the Earth, borne upon the Earth, and fated eventually to return to its place of origin. Parikka’s work demands that materiality be taken seriously not only by those who study media but also by all of those who interact with media – it is a demand that the anthropocene must be made visible.

    Time is an important character in both The Anthrobscene and A Geology of Media for it provides the context in which one can understand the long history of the planet as well as the scale of the years required for media to truly decompose. Parikka argues that materiality needs to be considered beyond a simple focus upon machines and infrastructure, but instead should take into account “the idea of the earth, light, air, and time as media” (GM 3). Geology is harnessed as a method of ripping open the black box of technology and analyzing what the components inside are made of – copper, lithium, coltan, and so forth. The engagement with geological materiality is key for understanding the environmental implications of media, both in terms of the technologies currently in circulation and in terms of predicting the devices that will emerge in the coming years. Too often the planet is given short shrift in considerations of the technical, but “it is the earth that provides for media and enables it”, it is “the affordances of its geophysical reality that make technical media happen” (GM 13). Drawing upon Mumford’s writings about “paleotechnics” and “neotechnics” (concepts which Mumford had himself adapted from the work of Patrick Geddes), Parikka emphasizes that both the age of coal (paleotechnics) and the age of electricity (neotechnics) are “grounded in the wider mobilization of the materiality of the earth” (GM 15). Indeed, electric power is often still quite reliant upon the extraction and burning of coal.

    The term “anthrobscene” is more than just a pithy neologism: Parikka introduces it to highlight the ecological violence inherent in “the massive changes human practices, technologies, and existence have brought across the ecological board” (GM 16-17), shifts that often go under the more morally vague title of “the anthropocene.” For Parikka, “the addition of the obscene is self-explanatory when one starts to consider the unsustainable, politically dubious, and ethically suspicious practices that maintain technological culture and its corporate networks” (A 6). Like a curse word beeped out by television censors, much of the obscenity of the anthropocene goes unheard even as governments and corporations compete with ever greater élan for the privilege of pillaging portions of the planet – Parikka seeks to reinscribe the obscenity.

    The world of high tech media still relies upon the extraction of metals from the earth and, as Parikka shows, a significant portion of the minerals mined today are destined to become part of media technologies. Therefore, in contemplating geology and media it can be fruitful to approach media using Zielinski’s notion of “deep time” wherein “durations become a theoretical strategy of resistance against the linear progress myths that impose a limited context for understanding technological change” (GM 37, A 23). Deploying the notion of “deep time” demonstrates the ways in which a “metallic materiality links the earth to the media technological” while also emphasizing the temporality “linked to the nonhuman earth times of decay and renewal” (GM 44, A 30). Thus, the concept of “deep time” can be particularly useful in thinking through the nonhuman scales of time involved in media, such as the centuries required for e-waste to decompose.

    Whereas “deep time” provides insight into media’s temporal quality, “psychogeophysics” presents a method for thinking through the spatial. “Psychogeophysics” is a variation of the Situationist idea of “the psychogeographical,” but where the Situationists focused upon the exploration of the urban environment, “psychogeophysics” (which appeared as a concept in a manifesto in Mute magazine) moves beyond the urban sphere to contemplate the oblate spheroid that is the planet. What the “geophysical twist brings is a stronger nonhuman element that is nonetheless aware of the current forms of exploitation but takes a strategic point of view on the nonorganic too” (GM 64). Whereas an emphasis on the urban winds up privileging the world built by humans, the shift brought by “psychogeophysics” allows people to bear witness to “a cartography of architecture of the technological that is embedded in the geophysical” (GM 79).

    The material aspects of media technology include many areas where visibility has broken down. In many cases this is suggestive of an almost willful disregard (ignoring exploitative mining and labor conditions as well as the harm caused by e-waste), but in still other cases it is reflective of the minute scales that materiality can assume (such as metallic dust that dangerously fills workers’ lungs after they shine iPad cases). The devices that are surrounded by an optimistic aura in some nations thus obtain this sheen at the literal expense of others: “the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force” (GM 89). Indeed, those who fawn with religious adoration over the newest high-tech gizmo may simply be demonstrating that nobody they know personally will be sickened in assembling it, or be poisoned by it when it becomes e-waste. An emphasis on geology and materiality, as Parikka demonstrates, shows that the era of digital capitalism contains many echoes of the exploitation characteristic of bygone periods – appropriation of resources, despoiling of the environment, mistreatment of workers, exportation of waste: these tragedies have never ceased.

    Digital media is excellent at creating a futuristic veneer of “smart” devices and immaterial sounding aspects such as “the cloud,” and yet a material analysis demonstrates the validity of the old adage “the more things change the more they stay the same.” Despite efforts to “green” digital technology, “computer culture never really left the fossil (fuel) age anyway” (GM 111). But beyond relying on fossil fuels for energy, these devices can themselves be considered as fossils-to-be as they go to rest in dumps wherein they slowly degrade, so that “we can now ask what sort of fossil layer is defined by the technical media condition…our future fossils layers are piling up slowly but steadily as an emblem of an apocalypse in slow motion” (GM 119). We may not be surrounded by dinosaurs and trilobites, but the digital media that we encounter are tomorrow’s fossils – which may be quite mysterious and confounding to those who, thousands of years hence, dig them up. Businesses that make and sell digital media thrive on a sense of time that consists of planned obsolescence, regular updates, and new products, but to take responsibility for the materiality of these devices requires that “notions of temporality must escape any human-obsessed vocabulary and enter into a closer proximity with the fossil” (GM 135). It requires a woebegone recognition that our technological detritus may be present on the planet long after humanity has vanished.

    The living dead that lurch alongside humanity today are not the zombies of popular entertainment, but the undead media devices that provide the screens for consuming such distractions. Already fossils, bound to be disposed of long before they stop working, it is vital “to be able to remember that media never dies, but remains as toxic residue,” and thus “we should be able to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41). We live with these zombies, we live among them, and even when we attempt to pack them off to unseen graveyards they survive under the surface. A Geology of Media is thus “a call for further materialization of media not only as media but as that bit which it consists of: the list of the geophysical elements that give us digital culture” (GM 139).

    It is not simply that “machines themselves contain a planet” (GM 139) but that the very materiality of the planet is becoming riddled with a layer of fossilized machines.

    * * *

    The image of the world conjured up by Parikka in A Geology of Media and The Anthrobscene is far from comforting – after all, Parikka’s preference for talking about “the anthrobscene” does much to set a funereal tone. Nevertheless, these two books by Parikka do much to demonstrate that “obscene” may be a very fair word to use when discussing today’s digital media. By emphasizing the materiality of media, Parikka avoids the thorny discussions of the benefits and shortfalls of various platforms to instead pose a more challenging ethical puzzle: even if a given social media platform can be used for ethical ends, to what extent is this irrevocably tainted by the materiality of the device used to access these platforms? It is a dark assessment which Parikka describes without much in the way of optimistic varnish, as he describes the anthropocene (on the first page of The Anthrobscene) as “a concept that also marks the various violations of environmental and human life in corporate practices and technological culture that are ensuring that there won’t be much of humans in the future scene of life” (A 1).

    And yet both books manage to avoid the pitfall of simply coming across as wallowing in doom. Parikka is not pining for a primal pastoral fantasy, but is instead seeking to provide new theoretical tools with which his readers can attempt to think through the materiality of media. Here, Parikka’s emphasis on the way that digital technology is still heavily reliant upon mining and fossil fuels acts as an important counter to gee-whiz futurism. Similarly, Parikka’s mobilization of the notion of “deep time” and fossils acts as an important contribution to thinking through the lifecycles of digital media. Dwelling on the undeath of a smartphone slowly decaying in an e-waste dump over centuries is less about evoking a fearful horror than it is about making clear the horribleness of technological waste. The discussion of “deep time” seems like it can function as a sort of geological brake on accelerationist thinking, by emphasizing that no matter how fast humans go, the planet has its own sense of temporality. Throughout these two slim books, Parikka draws upon a variety of cultural works to strengthen his argument: ranging from the earth-pillaging mad scientist of Arthur Conan Doyle’s Professor Challenger, to the Coal Fired Computers of Yokokoji-Harwood (YoHa), to Molleindustria’s smartphone game “Phone Story,” which plays out on a smartphone’s screen the tangles of extraction, assembly, and disposal that are as much a part of the smartphone’s story as whatever uses the final device is eventually put to. Cultural and artistic works, when they intend to, may be able to draw attention to the obscenity of the anthropocene.

    The Anthrobscene and A Geology of Media are complementary texts, but one need not read both in order to understand the other. As part of the University of Minnesota Press’s “Forerunners” series, The Anthrobscene is a small book (in terms of page count and physical size) which moves at a brisk pace, in some ways it functions as a sort of greatest hits version of A Geology of Media – containing many of the essential high points, but lacking some of the elements that ultimately make A Geology of Media a satisfying and challenging book. Yet the duo of books work wonderfully together as The Anthrobscene acts as a sort of primer – that a reader of both books will detect many similarities between the two is not a major detraction, for these books tell a story that often goes unheard today.

    Those looking for neat solutions to the anthropocene’s quagmire will not find them in either of these books – and as these texts are primarily aimed at an academic audience this is not particularly surprising. These books are not caught up in offering hope – be it false or genuine. At the close of A Geology of Media, when Parikka discusses the need “to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41), this does not appear as a perfect panacea but as a way of possibly adjusting. Parikka is correct in emphasizing the ways in which the extractive regimes that characterized the paleotechnic continue on in the neotechnic era, and this is a point which Mumford himself made regarding the way that the various “technic” eras do not represent clean breaks from each other. As Mumford put it, “the new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures” (Mumford 2010, 236) – in other words, just as Parikka explains, the paleotechnic survives well into the neotechnic. The reason this is worth mentioning is not to challenge Parikka, but to highlight that the “neotechnic” is not meant as a characterization of a utopian technical epoch that has parted ways with the exploitation that had characterized the preceding period. For Mumford the need was to move beyond the anthropocentrism of the neotechnic period and move towards what he called (in The Culture of Cities) the “biotechnic,” a period wherein “technology itself will be oriented toward the culture of life” (Mumford 1938, 495). Granted, as Mumford’s later work and as these books by Parikka make clear, instead of arriving at the “biotechnic” what we might get is instead the anthrobscene. And reading these books by Parikka makes it clear that one could not characterize the anthrobscene as being “oriented toward the culture of life” – indeed, it may be exactly the opposite. Or, to stick with Mumford a bit longer, it may be that the anthrobscene is the result of the triumph of “authoritarian technics” over “democratic” ones. Nevertheless, the truly dirge-like element of Parikka’s books is that they raise the possibility that it may well be too late to shift paths – that the neotechnic was perhaps just a coat of fresh paint applied to hide the rusting edifice of paleotechnics.

    A Geology of Media and The Anthrobscene are conceptual toolkits; they provide the reader with the drills and shovels they need to dig into the materiality of digital media. But what these books make clear is that along with the pickaxe and the archeologist’s brush, one also needs a gas mask to endure the noxious fumes. Ultimately, what Parikka shows is that the Situationist-inspired graffiti of May 1968, “beneath the streets – the beach,” needs to be rewritten in the anthrobscene.

    Perhaps a fitting variation for today would read: “beneath the streets – the graveyard.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    Mumford, Lewis. 2010. Technics and Civilization. Chicago: University of Chicago Press.

    Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)

    Men Explain Technology

    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had become complicated quite frighteningly in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in 5 Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps, there’s the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and that inspired the title of this talk: mansplained.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult if not dangerous to speak out, to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts often means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect those. They channel it. They concentrate it, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large are places where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted that he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer the Analytical Engine are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and who developed the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC computer, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park where civilian women’s involvement has been erased, as they were forbidden, thanks to classified government secrets, from talking about their involvement in the cryptography and computing efforts there).

    In the end, it’s hard to read Isaacson’s book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores if not undermines women. Men explain the history of computing, if you will. As such it tells a story too that depicts and reflects a culture that doesn’t simply forget but systematically alienates women. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that’s actually fallen over the past few decades from a high in 1983 of 37%. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment not just the education of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male. 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in the middle of a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. With women welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”

    This doesn’t just matter because computer science leads to “good jobs” or that tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
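
    To make the point concrete, here is a minimal, hypothetical sketch of how that hierarchy gets written directly into software; the role names and permissions below are invented for illustration and are not drawn from any real learning management system.

        # A minimal sketch of role-based "privileges" hard-coded into a tool.
        # Role names and permissions are illustrative, not from any real LMS.
        PRIVILEGES = {
            "administrator": {"create_course", "delete_course", "grade", "post", "read"},
            "teacher":       {"grade", "post", "read"},
            "student":       {"post", "read"},  # students rarely get more than this
        }

        def can(role, action):
            """Return True if the given role has been granted the given action."""
            return action in PRIVILEGES.get(role, set())

        print(can("teacher", "grade"))   # True
        print(can("student", "grade"))   # False: the hierarchy is baked into the code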

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system built by the University of Illinois in the 1970s, offered chat and messaging functionality. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote up or vote down certain topics. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice often is something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
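
    A toy calculation shows how quickly this concentration plays out; the group sizes and voting rates below are invented purely for illustration.

        # A minimal sketch of vote-based ranking: when a small, highly active group
        # casts most of the votes, its preferences decide what "surfaces to the top."
        # All numbers here are invented for illustration.
        core_group = 50        # small, very active minority
        everyone_else = 950    # large majority, voting only occasionally

        votes = {
            "topic favored by the core group": core_group * 1,            # the whole group votes
            "topic favored by everyone else": int(everyone_else * 0.02),  # 2% of the majority votes
        }

        for topic, count in sorted(votes.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{count:3d}  {topic}")
        # The minority's topic wins 50 votes to 19, despite the group being 5% of users.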

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically—bitterly ironically, I’d say—many pieces of software today increasingly promise “personalization,” but in reality, they present us with a very restricted, restrictive set of choices about who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or your family name is your first name (as names in China are ordered). Your name is presented how the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
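
    Here is a minimal, hypothetical sketch of the kind of signup template being described; the field names and constraints are invented for illustration, not taken from any particular product.

        # A minimal sketch of a "templated" signup form: the engineer's choices
        # (a binary gender menu, a mandatory first/last name) become the only
        # identities the system can represent. Fields are invented for illustration.
        GENDER_CHOICES = ("male", "female")   # the drop-down menu, as shipped

        def register(first_name, last_name, gender):
            if gender not in GENDER_CHOICES:
                raise ValueError("gender must be one of: " + ", ".join(GENDER_CHOICES))
            if not first_name or not last_name:
                raise ValueError("both a first and a last name are required")
            return {"display_name": f"{first_name} {last_name}", "gender": gender}

        print(register("Ada", "Lovelace", "female"))   # fits the template
        # register("Ada", "Lovelace", "nonbinary")     # ValueError: not in the menu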

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes ed-tech less so. What can we do to resist all this hard-coding? What can we do to subvert that hard-coding? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube—millions have watched the demonstrations from his physics lectures—has been accused of sexually harassing women in his edX MOOC.

    The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

    And see, that starts to hint at what I think the answer is to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been determined by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach that combines both social and technical—outside the realm of ed-tech, I recognize—is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others’ block lists, and Good Game Autoblocker blocks the “ringleaders” of GamerGate.
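    To make the mechanism these tools share a little more concrete, here is a minimal sketch, in Python, of the underlying idea: block lists that are maintained collaboratively, merged from multiple subscriptions, and applied in bulk. The data format and the block_user callback are invented for illustration; they are not the actual interfaces of The Block Bot, Block Together, or Twitter.

```python
# Illustrative sketch of collaboratively maintained, subscribable block lists.
# The list contents and the block_user() callback are hypothetical; a real tool
# would call the platform's own block endpoint here.

from typing import Callable, Iterable, Set


def merge_block_lists(lists: Iterable[Iterable[str]]) -> Set[str]:
    """Combine several collaboratively maintained block lists into one set."""
    blocked: Set[str] = set()
    for shared_list in lists:
        blocked.update(shared_list)
    return blocked


def apply_block_list(blocked: Set[str],
                     already_blocked: Set[str],
                     block_user: Callable[[str], None]) -> int:
    """Block, en masse, every account on the merged list not yet blocked."""
    newly_blocked = 0
    for account in sorted(blocked - already_blocked):
        block_user(account)
        newly_blocked += 1
    return newly_blocked


if __name__ == "__main__":
    # Two hypothetical subscribed lists, maintained by different moderators.
    list_a = {"harasser_1", "harasser_2"}
    list_b = {"harasser_2", "harasser_3"}

    merged = merge_block_lists([list_a, list_b])
    count = apply_block_list(merged, already_blocked={"harasser_1"},
                             block_user=lambda name: print(f"blocking @{name}"))
    print(f"{count} new accounts blocked")
```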

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply out of nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would have it, about rediscovering innovators who have been erased; it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they work to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never ever been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can’t really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.

    Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, to the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is, after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • A Dark, Warped Reflection

    A Dark, Warped Reflection

    a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

    Depending upon which sections of the newspaper one reads, it is very easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections, it is easy to feel overwhelmed by a sense of anxiety and impending doom; however, if one instead reads the sections devoted to “Business” or “Technology,” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, while drones deliver our every desire and we enjoy ever more immersive entertainment experiences, all of it providing plenty of investment opportunities…unless of course another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.

    [Video: https://www.youtube.com/watch?v=pimqGkBT6Ek]

    If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture, if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – but is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders, no occult phenomena, nor is there a suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of any electronic device when the power on the screen is off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

    Before this analysis goes any further it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments that can be made regarding how “this episode was better than that one” – this is not the concern here. Nor for that matter is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

    The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the morals of its episodes may be far less oppositional than they at first seem. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the futures they portray, but they also encourage the sort of droll, inured acceptance that is characteristic of the people in each episode of the program. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” and it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

    For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready-made product for the very age that it is critiquing. It is a salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; however, the show is less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows people to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

    Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. In other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful story-telling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

    In fairness, Black Mirror would likely lose some of its effectiveness in terms of impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction-grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows what interests lie behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed and brought the “grains” to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic/clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

    Of course, it can be simple to disregard these questions. Dwelling upon them certainly does take something away from the individual episodes, and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead the result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in those decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

    Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it can be easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic/clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort regarding a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, similar to the sad protagonist of the film Her, the characters in the storylines of Black Mirror rarely appear as active agents in relation to technology, even when they are depicted as truly “choosing” a given device. Rather, they have simply been reduced to consumers – whether they are consumers of social media, political campaigns, or an amusement park where the “show” is a person being psychologically tortured day after day.

    This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of these potential risks? Did they not care? Who bears responsibility? In focusing on the small-scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. Thus the program risks replicating a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not too distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable, but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is by bludgeoning them with cultural products that tell them they are powerless to resist. Or better yet, convince them that they will actually like being “assimilated.”

    Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to these new technologies? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

    “Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures, and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach the screens of the cinema. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use terms such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow, and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far less than ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

    It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smart phones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however, this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

    The term “another” is not used incidentally here, but is specifically meant to point to the fact that it is nothing new for people to see technology as a source for hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes woefully failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

    The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is to emphasize not the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

    What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying yesterday’s memories, the things that allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragic-comic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

    The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits?” It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing our reception of it may require altering our attitude towards it. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and which continue to bungle the reception. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyways.” This is not to suggest that one should focus all one’s hopes upon a farfetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

    “we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28/29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to a world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring horror-struck at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced – ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smart phones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on…and so on…Black Mirror shows that in these horrific worlds there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to consider because the anthology character of the show makes each episode quite different in terms of the issues upon which it dwells. The attitudes towards technology and society that are subtly suggested across the episodes are in line with the despairing aura that surrounds their various protagonists and antagonists. Yet, insofar as Black Mirror advances an ethos, it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors being depicted on screens, delivered in a television show that owes its success to keeping people transfixed to the horrors being depicted on their screens. The show holds up a “black mirror” to society, but what it shows is a world in which the tables are rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (swap out the word “filmgoers” with “tv viewers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do anything much to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the tv-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to fully disregard the program. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror, but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, that of any of the aforementioned thinkers) – he was referring in this comment to the works of Kafka, but the comment is still useful:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black mirrored surfaces in their lives today and which may be in their lives tomorrow. After all, one can look into the mirror in order to see the dirt on one’s face or one can look in the mirror because of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers LTD, 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings. Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. Bibliobazaar, 2008.
  • The Automatic Teacher

    The Automatic Teacher

    By Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he submitted a patent for the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon those teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”

    [Image: pressey_machines]

    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
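    What Pressey describes above is, in effect, a small branching algorithm, and it can be captured in a few lines. The following is a minimal sketch, in Python, of that logic as quoted: in test mode the apparatus advances after any keypress and tallies only correct responses, while in teach mode it counts every try and rolls up a new question only once the correct key is pressed. The sample question comes from Pressey’s own description; the data format and input handling are invented for illustration and are not a reconstruction of the actual device.

```python
# A sketch of the test-mode and teach-mode logic Pressey describes, not a
# reconstruction of the actual apparatus.

QUESTIONS = [
    # (prompt, number of the correct key, choices)
    ("James Oglethorpe founded the colony of",
     4, ["Connecticut", "Delaware", "Maryland", "Georgia"]),
]


def run_machine(teach_mode: bool, answers_for_demo=None):
    """Simulate one pass through the question roll; return the counter value."""
    counter = 0
    demo = iter(answers_for_demo or [])
    for prompt, correct_key, choices in QUESTIONS:
        while True:
            print(prompt, " ".join(f"({i}) {c}" for i, c in enumerate(choices, 1)))
            key = next(demo, None)
            if key is None:
                key = int(input("Press key 1-4: "))
            if teach_mode:
                counter += 1          # teach mode: the counter counts all tries
                if key == correct_key:
                    break             # a new question rolls up only after the right answer
            else:
                if key == correct_key:
                    counter += 1      # test mode: the counter records correct responses
                break                 # test mode advances after a single keypress
    return counter


if __name__ == "__main__":
    # Demo run in teach mode: two wrong presses and then the right one yield a count of 3.
    print("tries recorded:", run_machine(teach_mode=True, answers_for_demo=[1, 3, 4]))
```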

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    [Video: https://www.youtube.com/watch?v=n7OfEXWuulg]

    Market Failure

    UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes that,

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.

    [Image: ad_pressey]

    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930. But that original sales price was cost-prohibitive. $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine. He continued to present on his research at APA meetings. But he did write in a 1964 article “Teaching Machines (And Learning Theory) Crisis” that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote that

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

    The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    Those behind the massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay

  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    [Image: alien planet]

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims, “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now, in an epoch of greenhouse storms, than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting, however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much as or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them at the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending that legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure, this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us; indeed, this premise, understood in terms of stewardship or commonwealth, would in my opinion go far toward correcting and preventing such careless destruction. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us in this genocidal and suicidal predicament and keeps us lodged there as delusive ignoramuses. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem with saying so is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves. It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis, Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.
